Tag Archives: facebook

Facebook ads can be used to gauge cultural similarity between countries

The cultural similarity between countries and international migration patterns can be measured quite reliably using Facebook data, a new study reports.

Image via Pixabay.

“Cultural hotspot” isn’t the first thing that pops into mind when most of us think about social media. However, new research from the Max Planck Institute for Demographic Research in Rostock, Germany, shows that data from Facebook can be used to gauge cultural closeness between countries, as well as overall migration trends.

And the way to do it is to track ads for food and drink on the platform.

We are what we eat

“[A] few years ago, after reading a work of a colleague using data from the Facebook Advertising Platform, I was surprised to find how much information we share online and how much these social media platforms know about us,” said Carolina Coimbra Vieira, a Ph.D. student in the Laboratory of Digital and Computational Demography at the Max Planck Institute and lead author of the research, in an email for ZME Science.

“After that, I decided to work with this social media data to propose new ways of answering old questions related to society. In this specific case, I wanted to propose a measure of cultural similarity between countries using data regarding Facebook users’ food and drink preferences.”

For the study, the team developed a new approach that uses Facebook data to gauge cultural similarity between countries, by making associations between immigration patterns and the overall preference for food and drink across various locations.

They took this approach because migrants play a very important role in shaping cultural similarities between countries. That influence, however, is hard to study directly, the team explains, in part because culture itself is hard to ‘measure’ reliably. The traditional instrument for gauging culture is the survey, but surveys have several drawbacks, such as cost, the risk of bias in question construction, and the difficulty of applying them to a large sample of countries.

The team chose to draw on previous findings that show food and drink preferences may be a proxy for cultural similarities between countries, and build a new analytical method based on this knowledge. They drew on Facebook’s top 50 food and drink preferences in various countries — as captured by the Facebook Advertising Platform — in order to see what people in different areas liked to dine on.

“This platform allows marketers and researchers to obtain an estimate of the number of Facebook monthly active users for a proposed advertisement that matches the given input criteria based on a list of demographic attributes, such as age, gender, home location, and interests, that can be customized by the advertiser,” Vieira explained for ZME Science. “Because we focus on food and drink as cultural markers, we selected the interests classified by Facebook as related to food and drink. We selected the top 50 most popular foods and drinks in each one of the sixteen countries we analyzed to construct a vector indicator of each country in terms of these foods and drinks to finally measure the cultural similarity between them.”
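To make the vector idea concrete, here is a minimal sketch of comparing countries by their top food-and-drink interests. The interest lists are hypothetical placeholders, and Jaccard similarity over interest sets is an illustrative assumption rather than necessarily the paper’s exact metric; the real vectors come from the top 50 interests per country on the Facebook Advertising Platform.

```python
# Minimal sketch: compare countries by overlap in their top food/drink interests.
# Interest sets below are invented placeholders, not the paper's data.

def jaccard_similarity(a: set, b: set) -> float:
    """Share of interests two countries have in common (0 = none, 1 = identical)."""
    return len(a & b) / len(a | b)

top_interests = {
    "Spain": {"paella", "tortilla", "mate", "asado", "churros"},
    "Argentina": {"mate", "asado", "empanadas", "dulce de leche", "churros"},
    "Japan": {"sushi", "ramen", "matcha", "sake", "tempura"},
}

countries = sorted(top_interests)
for i, c1 in enumerate(countries):
    for c2 in countries[i + 1:]:
        score = jaccard_similarity(top_interests[c1], top_interests[c2])
        print(f"{c1} vs {c2}: {score:.2f}")
```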

In order to validate their findings, the team applied the method to 16 countries. They report that food and drink interests, as reflected by Facebook ads, generally align with documented immigration patterns. Preferences for foreign food and drink align with domestic preferences in the countries from which most immigrants came. On the other hand, countries with few immigrants also showed lower preferences for foreign foods and drinks, and showed consistent interest in a narrower range of such products.

The team cites the asymmetry between Mexico and the U.S. as an example of the validity of their model. The top 50 foods and drinks from Mexico are more popular in the U.S. than the top 50 U.S. foods and drinks are in Mexico, they explain, aligning well with the greater degree of immigration from Mexico into the U.S. than the other way around.

All in all, the findings strongly suggest that immigrants help shape the culture of various countries. In the future, the team hopes to expand their methodology to include other areas of preference beyond food and drink, and see whether these align with known immigration patterns.

“The food and drink preferences shared by Facebook users from two different countries might indicate a high immigrant population from one country living in the other. In our results we observed that immigration is associated with higher cultural similarity between countries. For example, there are a lot of immigrants from Argentina living in Spain and our measure showed that one of the most similar countries to Spain is Argentina. This means that foods and drinks popular between Facebook users in Argentina are also really popular in Spain,” she adds.

“The most surprising aspect of this study is the methodology and more precisely, the data we used to study culture. Differently from surveys, our methodology is timely, [cost-effective], and easily scalable because it uses passively-collected information internationally available on Facebook.”

Overall, the researchers say, this study suggests that immigrants indeed help shape the culture of their destination country. Future research could refine the new method outlined in this study or repurpose it to examine and compare other interests beyond food and drink.

“I would like to see our proposed measure of cultural similarity being used in different contexts, such as to predict migration. For instance, it would be interesting to use our measure of cultural similarity to answer the question: Do the migrants prefer to migrate to a country culturally similar to their origin country?” Vieira concludes in her email. “More generally, I hope our work contributes to increasing the development of research using social media data as an alternative to complement more traditional data sources to study society.”

The paper “The interplay of migration and cultural similarity between countries: Evidence from Facebook data on food and drink interests” has been published in the journal PLoS ONE.

Facebook is becoming a hotbed for climate denial disinformation

Anti-climate groups are using Facebook to sow doubt and confusion around climate science, a new report shows. A set of ads falsely denying climate change or the need to take action was viewed by at least eight million people in the first half of the year.

Credit: Flickr / Stock Catalog.

In September, Facebook launched a Climate Science Information Center and said it was committed to “tackling climate misinformation” through its fact-checking program. But this doesn’t seem to be enough.

InfluenceMap, an independent think tank that provides data and analysis on how business and finance affect the climate crisis, identified 51 climate disinformation ads paid for by conservative groups.

The social media network uses fact-checkers to ban false advertising, but this isn’t meant to “interfere with individual expression, opinions and debate,” and it’s not clear whether Facebook is deploying sufficient manpower for the task. The company is likely to exempt some forms of climate disinformation from fact-checking, the report argued. Of the 51 ads identified, only one was taken down.

Massachusetts senator Elizabeth Warren said in a statement: “The devastating report reveals how Facebook lets climate deniers spread dangerous junk to millions of people. We have repeatedly asked Facebook to close the loopholes that allow misinformation to run rampant on its platform. Facebook must be held accountable.”

The money came from conservative groups

According to Facebook’s Ad Library, there are currently 250,000 Facebook pages in the US that use paid ads to promote political messages. Using a list of 95 advertisers known to have promoted climate disinformation, InfluenceMap identified 51 climate disinformation ads in the US across a six-month period starting in January. The organization then looked to see where the money was coming from.

The ads were paid for by conservative groups with opaque funding, including non-profits such as PragerU, the Mackinac Center for Public Policy, the Texas Public Policy Foundation, and the Competitive Enterprise Institute, among others. Collectively, the groups identified in the report as using Facebook advertising to spread climate disinformation have a total revenue of $68 million per year.

Their most common strategy is to attack the credibility of climate science and of climate science communicators, frequently targeting the United Nations’ Intergovernmental Panel on Climate Change (IPCC). Arguments include denying that there is widespread consensus on climate change and suggesting a high level of uncertainty.

The report showed the ads were heavily distributed in rural US states and to men over the age of 55. Geographically, the highest intensity of impressions per person was found in Texas and Wyoming.

Additionally, the climate disinformation ads were distributed more to men than to women across all age groups. Dylan Tanner of InfluenceMap concluded:

“[Climate disinformation adverts] will be of concern to advertisers like Unilever and others who are clearly concerned about climate, both from the viewpoint of the company’s risk and also being on the same platform as these ads.”

Facebook connections can predict how COVID-19 spreads

The strength and density of Facebook connections between two geographical regions can predict COVID-19 outbreaks.
Credit: Pixabay.

The coronavirus mainly spreads through direct person-to-person contact, and it does a very good job of it, too, being roughly twice as contagious as the flu. In other words, this is a virus highly adapted to human nature, exploiting our propensity to engage in social activities.

By design, Facebook does a very good job of mirroring our real-life connections. Now, scientists have tapped into Facebook’s datasets to develop a model that can accurately predict the spread of COVID-19.

How your Facebook friends can predict which regions will be the most affected by COVID-19

Researchers at New York University used a new anonymized dataset from Facebook called the Social Connectedness Index. In the wake of the Cambridge Analytica scandal, the world’s biggest social network has tried to clear its name with a string of initiatives and programs meant both to tighten user privacy and to be more transparent about how it processes user data.

This dataset allowed the researchers to measure how connected two geographical regions are, based on Facebook friendships. To protect user privacy, it doesn’t expose raw data that could be mined at a more granular level, but it is enough to map the density of social connections.
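As a rough illustration (with made-up numbers): Facebook’s public description of the index reportedly normalizes the count of friendship links between two regions by the number of users in each region, up to a scaling factor. A minimal sketch:

```python
# Sketch of the Social Connectedness Index idea: friendship links between two
# regions, normalized by each region's user count. All numbers are invented.

def sci(connections: int, users_a: int, users_b: int) -> float:
    # Proportional to the probability that a random pair of users,
    # one from each region, are Facebook friends.
    return connections / (users_a * users_b)

# Hypothetical region pairs: more links per possible pair = tighter connection
print(sci(connections=120_000, users_a=900_000, users_b=600_000))
print(sci(connections=5_000, users_a=900_000, users_b=700_000))
```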

Using such an approach, the researchers assessed COVID-19 transmission in the early days of the epidemic in two important hotspots: Westchester County (a New York suburb) and Lodi province in the north of Italy.

In the New York suburb, the researchers found that coastal regions and urban centers had high levels of COVID-19 cases per capita, as well as high levels of connectedness to Westchester-based Facebook users. For Lodi, Facebook connections mirrored the spread of coronavirus to Rimini, a popular seaside resort on the Adriatic.

There were also associations connecting Facebook data to COVID-19 cases spreading between Lodi and several provinces in southern Italy, regions from which workers and students have historically migrated to the richer Lombardy region in the north.

These findings held true when wealth, population density, and geographical proximity were factored in.

“These results suggest that data from online social networks may prove useful to epidemiologists and others hoping to forecast the spread of communicable diseases such as COVID-19,” the authors wrote in their study.

At this moment, half of the world’s population is under lockdown. But this can’t last forever. At some point, social distancing restrictions will be loosened — and this will happen intermittently up to at least 2022, according to a recent study that ZME Science covered today.

This sort of epidemiological modeling based on social network connections might prove incredibly useful during the crucial moments in between lockdowns. For instance, the modeling could inform policymakers on which regions are most vulnerable to COVID-19 transmission, allowing them to take precautionary steps.

Elsewhere, governments are turning to other types of user data, such as aggregated data derived from telecommunication towers or browsing history from the mobile advertising industry.

Tech experts band together to issue $10 million challenge against deepfakes

A newly-announced challenge is offering a total of $10 million in prizes to those who can create reliable deepfake-spotting software.

Image via Pixabay.

It’s a good approach in life not to believe everything you see — but it’s a vital skill on today’s Internet. In response to the dangers posed by deepfakes, realistic AI-generated videos of people doing and saying fictional things, a group of technology firms and academics have banded together to find a solution.

The group, which also includes Facebook, announced Tuesday that they’re launching a public race to develop technology for detecting deepfakes.

Fake videos, real prizes

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” said Facebook chief technical officer Mike Schroepfer.

In total, Facebook is dedicating $10 million to the program. The challenge, called the Deepfake Detection Challenge (DFDC), will have a leaderboard and prizes, which will be given out “to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others,” Schroepfer explains.

Microsoft and the Partnership on AI have also thrown their weight behind the initiative. Partnership on AI is an industry-backed group whose mission is to promote beneficial uses of artificial intelligence. It includes members from the Massachusetts Institute of Technology, Cornell University, the University of Oxford, the University of California, Berkeley, the University of Maryland, and the University at Albany, and it is backed by Apple, Amazon, IBM, and other tech firms and non-governmental organizations.

All in all, the DFDC is likely the single most significant move ever taken against the dissemination of altered video and audio material intended to misinform public discourse. It’s also the first project on the subject of media integrity started by Partnership on AI.

Deepfakes “have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions,” explains the executive director of the Partnership on AI, Terah Lyons.

Facebook said the funds it put up for grabs will go towards research collaborations and prizes for the challenge. Facebook itself will also enter the competition, but not accept any of the prize money. According to the DFDC website, the challenge will run throughout 2020. A winner will be selected using “a test mechanism that enables teams to score the effectiveness of their models, against one or more black-box test sets from our founding partners.”
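The announcement doesn’t spell out the scoring formula. A common metric for black-box classification challenges of this kind is binary log loss, which punishes confident wrong answers hardest; the sketch below is an illustrative assumption, not the DFDC’s confirmed mechanism.

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Average binary cross-entropy over a labeled test set."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# 1 = deepfake, 0 = real; predictions are a detector's confidence scores
print(log_loss([1, 0, 1, 1], [0.9, 0.2, 0.6, 0.99]))
```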

Facebook posts can be used to predict anxiety, depression, and even diabetes

A new study finds that Facebook posts are better than demographic information when it comes to predicting a number of mental health conditions, as well as diabetes. This suggests that one day, our social media history might play an important role in the doctor’s office.

You can tell a lot about a person from their social media history, but medical information isn’t usually considered part of it. Yet this is exactly what the new study presents. The team analyzed the entire Facebook post history of around 1,000 patients (who had given their consent for the study), building three analysis models: one that looked at post language, one that looked at demographics, and one that combined the two.

They then looked at 21 different medical conditions, assessing whether Facebook history could be used to predict them. All 21 could be predicted, and 10 of them were predictable from post history alone, without even looking at the demographic information. It’s still early, but the results were impressive.

“This work is early, but our hope is that the insights gleaned from these posts could be used to better inform patients and providers about their health,” said lead author Raina Merchant, MD, MS, the director of Penn Medicine’s Center for Digital Health and an associate professor of Emergency Medicine. “As social media posts are often about someone’s lifestyle choices and experiences or how they’re feeling, this information could provide additional information about disease management and exacerbation.”

The language we use carries many unconscious biases which can be linked to our behaviors and habits. In turn, these behaviors can also be indicative of other underlying problems. Some connections were clear: people who tended to use words like “bottle” or “drink” more often were more likely to suffer from alcohol abuse. Others, however, were much less intuitive.

For instance, the people who most often used religious language (with words such as “God” or “pray”) were 15 times more likely to have diabetes than those who used these terms the least. Fed into the models, this kind of information can be extrapolated to predict serious conditions.
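For illustration, here is a minimal sketch of the general approach: turn post text into bag-of-words features and fit a classifier against a condition label. The data is fabricated and the study’s actual pipeline and features are richer, so treat this only as the shape of the idea.

```python
# Toy sketch: text features + logistic regression for one condition flag.
# Posts and labels below are fabricated for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "praying for my family tonight, god is good",
    "had a beer and a bottle of wine with friends",
    "my head hurts and i cant stop crying",
    "great run this morning, feeling strong",
]
labels = [1, 0, 1, 0]  # 1 = has the (fabricated) condition

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Probability that a new post's author carries the condition flag
print(model.predict_proba(["so much pain, tears all day"])[0, 1])
```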

“Our digital language captures powerful aspects of our lives that are likely quite different from what is captured through traditional medical data,” said study author Andrew Schwartz, PhD, visiting assistant professor at Penn in Computer and Information Science, and an assistant professor of Computer Science at Stony Brook University. “Many studies have now shown a link between language patterns and specific disease, such as language predictive of depression or language that gives insights into whether someone is living with cancer. However, by looking across many medical conditions, we get a view of how conditions relate to each other, which can enable new applications of AI for medicine.”

Because the content we publish on Facebook is produced outside any clinical setting, it can capture information that’s rarely mentioned to a doctor, including potential markers for specific diseases. For depression, words like “pain,” “crying,” or “tears” were good indicators, but so were less obvious words such as “stomach,” “head,” or “hurt”.

It’s not the first time this idea has been suggested. Previous research found that Facebook history can be indicative of mental health conditions such as depression. The fact that this approach extends to conditions such as diabetes is even more encouraging.

Now, the team is carrying out a larger trial in which participants will share their social media history with their doctor, to see how this data can best be used in a practical setting. This points to the study’s one big caveat: the sample. Not only was it fairly small, it was also largely female (76%) and African American (71%), so it is not representative of the wider population.

Journal Reference: Merchant et al. Evaluating the predictability of medical conditions from social media posts. PLOS ONE. DOI:10.1371/journal.pone.0215476

Ridiculous DeepFake video of Mark Zuckerberg stretches Facebook’s fake news policies to the limit

You’re browsing through Facebook’s newsfeed when you come across a recording of Mark Zuckerberg, none other than the social network’s founder, giving an outrageous speech on national television. “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” Zuckerberg says in the video. “I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”

https://www.instagram.com/p/ByaVigGFP2U/?utm_source=ig_embed

Except it wasn’t Zuckerberg that said any of that. It’s all a hoax. It’s part of a recent genre of AI-driven technology called ‘deepfakes’. This particular video, which was uploaded to Instagram, was produced by a partnership between a pair of artists and the advertising company Canny. While meant as a demonstration, similar deepfakes can be a lot more sinister and damaging.

Recently, a doctored video of House Speaker Nancy Pelosi made the rounds on social media, causing an uproar. Instead of removing the fake video, Facebook chose to de-prioritize it, meaning it showed up less frequently in users’ feeds. It also displayed third-party fact-checker information, just as you’d see on a fake news story shared over Facebook.

But will the social network take a more radical stand now that its own brand can be directly affected by such impersonations? When the Pelosi fake surfaced on the platform, Neil Potts, Facebook’s director of public policy, said that this would make no difference. Now that the hypothetical has turned into reality, it remains to be seen how Facebook will react.

Deepfakes are seriously creepy as well as dangerous. The technique has been used to attribute fake statements to politicians like Barack Obama and Vladimir Putin, and to swap Nicolas Cage’s face into famous movies he never starred in. Similar tech was used to swap the faces of celebrities into porn.

It’s been getting ridiculously easy to create these, too. Previously, researchers made deepfakes starting from a single image or painting, bringing to life portraits of Einstein, Dali, and even the Mona Lisa. Elsewhere, a recent collaboration between Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research produced software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

The Zuckerberg video was created like just about any other deepfake. It employed an algorithm developed at the University of Washington by the same people who made the Obama fake videos. Canny also sprinkled in some code inspired by Stanford’s Face2Face program, which enables real-time facial matching. The algorithm was trained on short scenes featuring the target face, each lasting no more than 45 seconds, and a voice actor’s recording was then used to reconstruct frames showing Zuckerberg making statements he never actually voiced.

Canny used the same software to make similar deepfakes of Kim Kardashian and Donald Trump, which were showcased at Spectre, an exhibition that took place as part of the Sheffield Doc Fest in the UK.

https://www.instagram.com/p/ByPhCKuF22h/

https://www.instagram.com/p/ByKg-uKlP4C/?utm_source=ig_embed

If you pay close attention, it’s relatively easy to spot these as fakes. The voices are particularly unnatural and synthetic, but with some improvements, they could become indistinguishable from those of real people. Just a few weeks ago, someone made an AI that sounds just like Joe Rogan. Seriously, listen to the production and be prepared for one heck of a trip.

In the years to come, deepfakes will only get better and easier to make. If you thought fake news was bad, wait until you see this ungodly technology released into the wild.

Facebook might have more dead users than alive by 2100

A new study from the Oxford Internet Institute (OII), part of the University of Oxford, estimates that in roughly fifty years’ time Facebook will have more accounts belonging to deceased users than to living ones.

Day of the Dead.

Image via Pixabay.

How should a social media platform handle the accounts of those departed? It doesn’t sound like a very pressing issue but, based on the results of the new analysis, it’s one that we will have to face sooner rather than later. The team writes that, based on 2018 user levels, at least 1.4 billion Facebook members will die before 2100.

The Night King’s digital army

“These statistics give rise to new and difficult questions around who has the right to all this data, how should it be managed in the best interests of the families and friends of the deceased and its use by future historians to understand the past,” said lead author Carl Öhman, a doctoral candidate at the OII.

If the prediction is accurate, this would mean that accounts belonging to the deceased will outnumber those of living people by 2070. If the current rate at which the platform expands continues unabated, the authors go on to explain, the number of deceased users could reach as many as 4.9 billion before the end of the century.

This is a trend that we, as a society, have never had to contend with until now — one that’s bound to have grave implications for how we treat our digital heritage in the future.

“On a societal level, we have just begun asking these questions and we have a long way to go,” Öhman adds. “The management of our digital remains will eventually affect everyone who uses social media, since all of us will one day pass away and leave our data behind.”

“But the totality of the deceased user profiles also amounts to something larger than the sum of its parts. It is, or will at least become, part of our global digital heritage.”

Co-author David Watson, also a DPhil student at the OII, says that the social platform, in essence, amounts to an immense archive of human behavior and culture. So, in a way, those who control what happens to it will “control our history”. Watson cautions that it’s therefore very important to ensure we don’t limit access to this historical data to a single for-profit firm. “It is also important to make sure that future generations can use our digital heritage to understand their history,” he adds.

The predictions are based on data from the United Nations, which provides the expected number of deaths and total populations for every country in the world, distributed by age. Facebook-specific data was scraped from the company’s Audience Insights feature. While the study notes that this self-reported dataset has several limitations, it provides the most comprehensive publicly available estimate of the network’s size and distribution.

The study sets up two potential extreme scenarios, arguing that the platform’s future evolution will likely fall somewhere in between them (a simplified sketch of the underlying projection follows the list):

  • The first scenario assumes that no new users join the platform after 2018. In this case, Asia’s share of deceased users will increase rapidly, and will eventually account for some 44% of the total number of such accounts by 2100. Roughly half of those accounts will be owned by individuals from India and Indonesia, which together account for just under 279 million Facebook mortalities by 2100.
  • For the second scenario, the team assumed that Facebook will continue to expand at its current rate of 13% per year until reaching market saturation (i.e. there are no new users left to join). In this case, Africa will also take up an important slice of the total number of dead users. Nigeria, in particular, takes the lead, accounting for over 6% of the total figure. Western users will account for only a minority of deceased accounts, with only the US making the top 10.
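Here is the promised sketch: a heavily simplified, back-of-the-envelope version of such a projection, with invented numbers and fixed age buckets. The study itself works from UN life tables and Audience Insights estimates per country, and accounts for cohorts aging over time.

```python
# Toy projection: expected cumulative deceased accounts, assuming no new
# signups (scenario one) and, for brevity, ignoring cohort aging.
# All figures below are invented.

users_by_age = {25: 1_000_000, 45: 600_000, 65: 250_000}
annual_mortality = {25: 0.001, 45: 0.004, 65: 0.020}  # fabricated rates

deceased = 0.0
for year in range(2019, 2101):
    for age in users_by_age:
        deaths = users_by_age[age] * annual_mortality[age]
        users_by_age[age] -= deaths
        deceased += deaths

print(f"Cumulative deceased accounts by 2100: {deceased:,.0f}")
```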

“The results should be interpreted not as a prediction of the future, but as a commentary on the current development, and an opportunity to shape what future we are headed towards,” explains Öhman.

“But this has no bearing on our larger point that critical discussion of online death and its macroscopic implications is urgently needed. Facebook is merely an example of what awaits any platform with similar connectivity and global reach.”

Watson says that Facebook should consult with historians, archivists, archaeologists, and ethicists to curate the vast amount of data left behind when someone passes away.

“This is not just about finding solutions that will be sustainable for the next couple of years, but possibly for many decades ahead.”

The paper “Are the dead taking over Facebook? A Big Data approach to the future of death online” has been published in the journal Big Data & Society.

Facebook to start limiting the spread of vaccine misinformation

“Facebook has conspired with the government and big pharma to push their pro-vaccine agenda upon us” — some guy in the comment section, probably.

According to a recent announcement, Facebook has decided that pages spreading misinformation and “verifiable vaccine hoaxes” need to take a backseat. The social media giant has announced a new strategy to reduce the prominence of pages and groups spreading misinformation without banning them outright. The first move is to start taking action against verifiable hoaxes.

“Leading global health organizations, such as the World Health Organization and the US Centers for Disease Control and Prevention, have publicly identified verifiable vaccine hoaxes. If these vaccine hoaxes appear on Facebook, we will take action against them,” Monika Bickert, VP of global policy management at Facebook, wrote in a public announcement.

For instance, the ranking and findability of pages that promote such hoaxes will be reduced substantially. These pages will also no longer appear as page recommendations or in predictions when you type in the Search bar. Ads containing any such hoaxes will also be banned.

Facebook also said they will provide “additional context” regarding these pages, although the company did not say exactly how this will happen.

“We also believe in providing people with additional context so they can decide whether to read, share, or engage in conversations about information they see on Facebook. We are exploring ways to give people more accurate information from expert organizations about vaccines at the top of results for related searches, on Pages discussing the topic, and on invitations to join groups about the topic. We will have an update on this soon.”

Why this matters

Between 1 February 2017 and 31 January 2018, 14,732 cases of measles were reported in Europe alone, with 57 fatalities since 2016.

Facebook is only the latest company to announce proactive measures to curb misinformation. The topic of vaccination is particularly important because, in recent years, preventable diseases have made a worrying comeback, largely on the back of anti-vaxx misinformation. This misinformation is often more visible than actual scientific information, and social media (with Facebook and YouTube at the forefront) have helped spread it.

YouTube is starting to roll out manual fact-checking for “verified hoaxes” and has demonetized channels built on anti-vaccine misinformation, while Pinterest has taken a firmer stance, blocking ‘vaccination’ searches altogether. These are still baby steps toward solving a much wider global problem, but they are more than welcome.

Apps aimed at kids are a sponge of personal data, in direct violation of federal law, study reports

Thousands of apps targeted at children are silently and unlawfully gathering their data, study finds.

Peekaboo, they see you. Image credits: Thomas Quinn.

In the wake of the Facebook / Cambridge Analytica meltdown, people are understandably quite concerned about the heap of data apps have gathered on them, and what happens to this wealth of information. Well, I’m sorry to break it to you, but according to a study published on April 16th, you should be even more concerned.

Hide your kids

Researchers from the International Computer Science Institute say that the majority of free Android apps intended for children are tracking their data — in direct violation of the Children’s Online Privacy Protection Act, or COPPA, a federal law that regulates data collection from users under 13 years of age.

The study analyzed 5,855 apps targeted at children, each gathering an average of 750,000 downloads between November 2016 and March 2018, according to the paper. These apps, which had over 172 million downloads combined, were games like Fun Kid Racing and Motocross Kids — Winter Storm. Using a Nexus 5X as a platform, the team downloaded and ran each app for about 10 minutes to simulate a typical session. The results were quite worrying.

Thousands of the apps the team looked at collected data from the device in some way or another, in some cases including location (GPS) data or personal information. Some 235 of these apps accessed the phone’s GPS data, and 184 of those later transmitted it to advertisers, according to the study. Serge Egelman, the paper’s co-author, says the findings are bound to worry parents, particularly since they would need an ‘expert’ level of technical knowledge to figure out for themselves which apps behave this way.

“They’re not expected to reverse-engineer applications in order to make a decision whether or not it’s safe for their kids to use,” he said.

People often give permission for apps to gather ad-tracking data in exchange for free service — we’re all guilty of doing this at one point or another. It isn’t only Android apps that do it, either. For better or for worse, there is a myriad of apps — and most likely a Facebook tracker — peeking at your data all the time.

However, we’re adults, and the right to make our own choices comes with its own risks, including the permissions we give away to apps. Children, who aren’t discerning enough to know what consequences their taps might have, are given protected legal status through COPPA. Children’s apps are thus not allowed to track data without first gaining explicit parental consent. The study, however, found that many of the apps analyzed didn’t conform to the law.

Egelman says that even when companies try to ensure they conform to COPPA, the results are still worrying. The simulated interactions were handled by a machine randomly pressing buttons, and most apps still tracked data in one form or another. COPPA requires producers to get “verifiable consent,” meaning they have to take steps to ensure that people know what information they are releasing to the app.

“If a robot is able to click through their consent screen which resulted in carrying data, obviously a small child that doesn’t know what they’re reading is likely to do the same,” Egelman said.

Back in 2014, Google allowed users to reset their Android Advertising ID to give them better control over how online apps track their data. Developers are required to only use that ID when tracking user data, but the team says two-thirds of the apps they looked at didn’t allow users to reset their ID. Even more glaringly, over 1,000 of the apps also collected personal information in direct violation of Google’s terms of service, which prohibits such tracking in apps targeted towards children.

To add insult to injury, over 40% of the apps failed to transfer data securely. Some 2,344 children’s apps transmitted collected data without TLS encryption, which both encrypts data in transit and verifies the recipient’s identity. TLS is the “standard method for securely transmitting information,” the researchers said.

The paper “‘Won’t Somebody Think of the Children?’ Examining COPPA Compliance at Scale” has been published in the journal Proceedings on Privacy Enhancing Technologies.

Stressed out? Try briefly quitting Facebook, new study says

Credit: Pixabay.

Disheartened by the recent Cambridge Analytica scandal, many people have aligned with the #deletefacebook movement as a form of protest. But besides sending a clear message that your privacy matters, forgoing Facebook might actually be good for you, at least for your mental health. According to a recent study carried out by Australian psychologists, even a few days away from the social network lowered levels of cortisol, the stress hormone.

The team at the University of Queensland enlisted 138 daily Facebook users aged 18-40, of whom 51 were men and 87 were women. The participants were split into two groups: one that took five days off Facebook and one that continued using the app, business as usual.

Saliva tests were taken before and after the study to monitor cortisol levels. Almost every cell contains receptors for cortisol, so the hormone can have many different actions depending on which sort of cells it acts upon. These include controlling the body’s blood sugar levels and thus regulating metabolism, acting as an anti-inflammatory, influencing memory formation, controlling salt and water balance, and influencing blood pressure and fetal development.

Cortisol, which is released by the adrenal glands, is important for helping your body deal with stressful situations, as the brain triggers its release in response to many different kinds of stress. However, when cortisol levels are too high for too long, this hormone can hurt you more than it helps.

Reporting in the Journal of Social Psychology, the team found that those in the ‘no Facebook’ group had lower cortisol levels, and hence felt less stressed. However, participants in the same group also reported lower life satisfaction. Those who continued to use Facebook as usual “reported an increase of their well-being.”

“Our results suggest that the typical Facebook user may occasionally find the large amount of social information available taxing, and Facebook vacations could ameliorate this stress,” the authors note, before adding: “at least in the short-term.”

“It seems that people take a break because they’re too stressed, but return to Facebook whenever they feel unhappy because they have been cut off from their friends,” said study co-author Eric Vanman, a psychologist from the University of Queensland.

“It then becomes stressful again after a while, so they take another break. And so on.”

One big study limitation is the small sample size. It would be a stretch to make generalizations across a social network numbering one billion users. But, even so, the study is intriguing because it shows how social media is permeating our lives — to the point that our life satisfaction depends on how much social media we consume.

Facebook turns over 3,000 Russian-bought ads featuring rifles, anti-immigrant messages

On Monday, Facebook handed some 3,000 ads over to congressional investigators, as the tech giant believes they were purchased for Russian propaganda. The ads, as well as the accounts and pages involved, haven’t been made public yet, but their purpose seems to be “to sow discord and chaos, and divide us from one another.”

Moscow posters.

Propaganda posters in Moscow.
Image credits Peggy Lachmann-Anke, Marco Lachmann-Anke.

Most of these ads don’t support a specific candidate; instead, they cluster around heated topics in American society with the purpose of fueling debate and division, particularly issues regarding immigration and race relations. Facebook reports that they were disseminated through multiple pages and profiles, 470 of which have been linked to the Internet Research Agency, or IRA. The IRA is a so-called “troll farm” based in Saint Petersburg, a company that uses fake accounts to run online influence and disinformation operations on behalf of the Russian government. The IRA is known to have meddled in the 2016 US presidential campaign in favor of then-candidate Donald Trump.

Rock bottom and then some

More often than not, these ads appear unassuming, and the propaganda stems from places that seem quite distant from political spheres. The Washington Post recently reported that one of the paid ads showcased a black woman “dry firing” an unloaded rifle. The purpose of this ad is unclear, though it did hit the web amid a period of racial tensions in the US. The New York Times, in turn, traced Russian propaganda back to a variety of groups including a “Defend the 2nd” group “festooned with firearms and tough rhetoric,” a gay rights group named “LGBT United,” even an animal-lover page plastered with pictures of puppies.

CNN reported several ads in support of the Black Lives Matter movement, which appeared to be targeted specifically to Ferguson and Baltimore during the protests. The material itself wasn’t as much about supporting the movement as it was about “portraying the group as threatening to some residents.”

Facebook hasn’t identified which ads were purchased by Russian-based entities thus far, but it also hasn’t prevented them from leaking. The Daily Beast, for example, has recovered and reported content from accounts it believes are associated with Russian interests, such as the United Muslims of America. According to the Daily Beast, “Russians impersonated real American Muslims to stir chaos on Facebook and Instagram.”

“These ads are significant to our investigation as they help demonstrate how Russia employed sophisticated measures to push disinformation and propaganda to millions of Americans online during the election, in order to sow discord and chaos, and divide us from one another,” Representative Adam Schiff (D-Calif.), ranking member of the House Intelligence Committee, told ABC News.

These ads make me sick, so I won’t show them here — all the pieces I’ve linked have plenty. They’re equal parts ignorant and infuriating, with strange yellow font and typos peppered throughout. The Committee has gained possession of the ads, and Schiff hopes to release a “representative sampling” of them to the public.

Facebook said it will strive for greater transparency in the future. Towards this end, they will hire about 1,000 more ad reviewers in the near future, and will request groups running political ads to post copies of these ads publicly.

The first line of defense here isn’t policy, or a company that may or may not have a conflict of interest in limiting ad sales; it’s each and every one of us. These ads are trying to influence your opinion; somebody is paying money to try and change your mind. Don’t give them the satisfaction. Never hate because you’re told to hate. Don’t accept conflict in lieu of cooperation, especially when cooperation looks difficult, even impossible. Don’t allow lines to be drawn for you, separating an arbitrary “us” from a just-as-arbitrary “them”. Just because it’s on the Internet, it doesn’t mean it’s true. Especially when somebody is shelling out money at Facebook so you’ll see what they have to say.

Worthy causes spread by themselves. Propaganda spreads by paying for ads.

Facebook bans “fake news” from advertising

A lie gets halfway around the world before the truth has a chance to get its pants on — and nowhere is that truer than on Facebook.

Post-truth has taken the world by storm. We’re dealing with fake news, alternative facts, whatever you want to call it. Never has information been more readily available in all imaginable forms, but like a perverted Garden of Eden, the web of lies creeps in at every corner, swamping information and degrading it.

Just take the already classic fake story claiming Pope Francis endorsed Trump (then still a candidate). By November 8, the story had picked up 960,000 Facebook engagements, according to BuzzFeed. Pope Francis had to hold a press conference and deny the claims, but that denial was shared at least ten times less. Basically, the lie prevailed against the truth, and the lie is what most people read. Nor is it a singular story. Fueled greatly by the White House administration, these so-called alternative facts (let’s call them lies, shall we?) have risen to prominence, especially on social media. It took Facebook a while to adapt to the new context, but now the tech giant is taking some serious steps to fight fake news.

In a blog post, Facebook said Pages that “repeatedly share stories marked as false” by third-party fact-checkers will be banned from buying ads. As usual, Facebook wasn’t very explicit about what it means by “repeatedly” or who the third-party fact-checkers will be. The ban is also not permanent. Still, while it’s not the toughest approach, it’s understandable that Facebook wants to tread lightly.

Preventing these pages from advertising will likely be quite effective. Most of the time, these pages exist to funnel readers to a money-making website, so they invest in Facebook advertising in the hope of profiting even more from viral, fake stories. This is exactly where the update strikes.

“This update will help to reduce the distribution of false news which will keep Pages that spread false news from making money. We’ve found instances of Pages using Facebook ads to build their audiences in order to distribute false news more broadly. Now, if a Page repeatedly shares stories that have been marked as false by third-party fact-checkers, they will no longer be able to buy ads on Facebook. If Pages stop sharing false news, they may be eligible to start running ads again.”

“Today’s update helps to disrupt the economic incentives and curb the spread of false news, which is another step towards building a more informed community on Facebook.”

It remains to be seen whether this approach will be successful or not. Facebook will likely take small, incremental steps and assess how things go before moving on to bigger things. That’s the way the wheel must turn when you have over one billion users.

Facebook has been cracking down on fake stories since last fall. Facebook users can flag stories as ‘fake’, and these are then sent to the third-party partners for fact-checking. So far, this has had only a mild effect on the news sphere. The lies are still there, and people are still buying them.

Perhaps more importantly, we have to change, not just Facebook. Too often, we place too much trust in social media, buying everything we see there. Often, we no longer get the news from reputable sources, but simply read some random title from a random Facebook page and take it as a given. Simply put, that just won’t do. Read the original source. Do a quick fact check on Google. Use critical thinking, and only share after you’re convinced something is true. If not for yourself, then at least for your Facebook friends: you are the gatekeeper of their information, and you have a responsibility. Facebook must change and it must improve — but at the end of the day, so do we.

Facebook: where relationship builders, town criers, window shoppers, and selfies come to chat

There are four categories of Facebook personalities, Brigham Young University research reveals.

Facebook smartphone.

Image credits Krzysztof Kamil.

Quick: try to recall the last day you spent without logging into Facebook. Most of you probably can’t. Not only do we use the platform daily, we also spend a lot of time there once we’ve logged in. Which raises the question: why do we like it so much?

“What is it about this social-media platform that has taken over the world?” asked lead author Tom Robinson. “Why are people so willing to put their lives on display? Nobody has ever really asked the question, ‘Why do you like this?'”

“Social media is so ingrained in everything we do right now,” said study co-author Boyle. “And most people don’t think about why they do it, but if people can recognize their habits, that at least creates awareness.”

To find out, the team compiled a list of 48 statements designed to gauge potential reasons why people visit the platform. Participants were asked to sort these statements in a way that they felt reflected their personal connection to the ideas and then rate them on a scale from “least like me” to “most like me”. After this step, the researchers sat down for an interview with each participant to get a better understanding of why they ranked and rated the way that they did.

Based on the responses, the team says there are four main reasons — translated into four categories — why people hang out on the book: they’re either relationship builders, town criers, window shoppers, or the ever-present selfies. So let’s see what each of them does.

The book of (four) faces

Relationship builders are those who use the platform closest to its intended role: as an extension of their real-life social activity. They post, respond to others’ posts, and use additional features primarily to strengthen existing relationships and to interact virtually with real-life friends and family. This group identified strongly with statements such as “Facebook helps me to express love to my family and lets my family express love to me.”

Town criers show a much larger decoupling between their real and virtual lives. They’re less concerned with sharing content (photos, stories, other information) about themselves, but will put a lot of effort into informing others of current events, much like the town criers of yore. You’ll likely spot this group reposting ZME Science, sharing event announcements, or voicing their opinion on something they feel strongly about. Beyond that, they’re likely to neglect their profiles and to keep tabs on family and friends through other means.

Window shoppers also use Facebook but rarely post personal information. But in contrast to town criers, co-author Clark Callahan says, these users “want to see what other people are doing. It’s the social-media equivalent of people watching.” They identify with statements such as “I can freely look at the Facebook profile of someone I have a crush on and know their interests and relationship status.”

Lastly, the selfies. This group mostly uses Facebook (can you guess?) for self-promotion. Like relationship builders, they’re very energetic posters of content, but unlike them, they do so in an effort to garner likes, comments, and attention in general. Their end goal, the team says, is to craft and present a social image of themselves, “whether it’s accurate or not.” This category identified with statements such as “The more ‘like’ notification alarms I receive, the more I feel approved by my peers.”

Previous research into social media has explored users falling into the relationship-builder and selfie groups, but the town criers and window shoppers were a novel (and unexpected) find.

“Nobody had really talked about these users before, but when we thought about it, they both made a lot of sense,” Robinson adds.

If you’ve been trying to decide which group you fall into, the authors point out that it’s rarely an exact fit, and you likely identify with more than one category to some degree.

“Everybody we’ve talked to will say, ‘I’m part of this and part of this, but I’m mostly this,'” said Robinson, who calls himself a relationship builder.

The paper “I ♥ FB: A Q-Methodology Analysis of Why People ‘Like’ Facebook” has been published in the International Journal of Virtual Communities and Social Networking.

Facebook will now tell you if a story might be fake

Fake news

Credit: Facebook.

Social media promised to give each of us a voice and to deliver news faster than ever before, unfiltered by mainstream news outlets. While the benefits of social media for the news ecosystem are obvious, it can sometimes do more harm than good. The most recent presidential election in the United States and the polarizing Brexit referendum in Britain are prime examples. Millions were duped by fake news sites which appeared overnight and died just as fast, and some of the most popular trending stories on Facebook were fake.

Facebook has always positioned itself against fake news but, at the same time, it has found itself caught in a vise, with freedom of expression pressing from the other side. Now, it has finally come up with a solution by partnering with third parties in Poynter’s International Fact-Checking Network, which includes fact-checkers like Snopes and the Associated Press.

“We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully,” Adam Mosseri, vice president of product management for News Feed, wrote. “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.”

Facebook users can flag stories as ‘fake’, and these will then be sent to the third-party partners for fact-checking. If a story you’re planning to share is considered fake (say, news that a celebrity has died when the person is in fact alive and well), you will be prompted with a notice that it might be fake. Users who see your shared story in their newsfeed will see the same notice, and clicking the link will bring up an explanation of why the story might be factually inaccurate.

The way the system is set up, stories with the greatest numbers of flags and shares move up the list of priorities for the third-party partners to fact-check. There are also algorithms that automatically flag stories for checking based on known patterns, such as low share numbers after the headline is clicked.
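Facebook hasn’t published how it weighs these signals; purely as a toy illustration, a prioritization queue might look like the sketch below, with invented weights (a real system would learn them from data).

```python
# Toy fact-checking queue: stories with many flags and many shares go first.
import heapq

stories = [
    {"id": "celebrity-death-hoax", "flags": 120, "shares": 40_000},
    {"id": "miracle-cure", "flags": 15, "shares": 90_000},
    {"id": "fake-endorsement", "flags": 300, "shares": 5_000},
]

def priority(story):
    return story["flags"] * 100 + story["shares"]  # made-up weighting

# heapq pops the smallest value, so negate the score for a max-queue
queue = [(-priority(s), s["id"]) for s in stories]
heapq.heapify(queue)
while queue:
    _, story_id = heapq.heappop(queue)
    print("fact-check next:", story_id)
```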

Disputed articles will also be barred from the Facebook ad marketplace, in an attempt to crack down on fake news sites that have made quite a lot of money by spreading lies. This way, Facebook hopes, spammers will lose their financial motivation to market fake news.

It sounds like a good strategy. This way, Facebook doesn’t take on the role of arbiter, and users at least get a sense of how genuine news stories are. Controversial stories will not be flagged as fake, as this is not the purpose of Facebook’s new roll-out.

Personally, I feel like this is a much needed update. It remains to be seen whether or not it will prove effective.

Study finds viewing selfies on social media can make you miserable or jolly — depending on how you see yourself

Selfies may be ruining your life, Penn State researchers have found. The researchers have linked frequent viewing of these kinds of pictures on social networks with lower levels of self-esteem and life satisfaction.

Image credits Kaique Rocha / Pexels.

OK, disclosure time: I am not a fan of the selfie. Part of that is because I don’t like the way I look in pictures. But the thing that drives me over the edge is that whenever I log into Facebook, I get swamped in a soul-crushing flood of the things. Unless you’re doing something amazing (the textbook definition, like being an astronaut, not the “amazing day!!!” way it’s thrown in selfie descriptions), one of these isn’t warranted.

And it seems like I’m not the only one, as Penn State University researchers have found that “lurking” — the act of observing content on social media without taking an active part in posting, liking, or commenting — can have a detrimental effect on how we view ourselves.

Wang and Fan Yang, two mass communications graduate students at Penn State, together with their graduate adviser, associate professor in communications Michel Haigh, conducted an online survey to study the effects of posting and viewing selfies and groupies (group photos). The researchers found that posting didn’t have any significant psychological effects on the participants. Viewing, however, did: the more often people viewed their own or others’ selfies, the lower their levels of self-esteem and life satisfaction.

“People usually post selfies when they’re happy or having fun,” said Wang. “This makes it easy for someone else to look at these pictures and think your, his, or her life is not as great as theirs.”

The participants who reported having a stronger desire to appear popular were more sensitive to selfie viewing. In their case, however, viewing selfies and groupies appeared to increase self-esteem and life satisfaction. The team says this likely happens because it satisfied the participants’ desire to appear popular. Frequently viewing groupies tended to have a positive effect on both of the measured traits.

“It is probably because when people view groupies on social media, they feel a sense of community, as the groupies they view may also contain themselves,” according to the study.

The researchers hope their findings, drawn from a survey of 275 people in the US, can help raise awareness about the effects of social media use and how it influences the way we perceive ourselves.

“We don’t often think about how what we post affects the people around us,” said Yang. “I think this study can help people understand the potential consequences of their posting behavior. This can help counselors work with students feeling lonely, unpopular, or unsatisfied with their lives.”

But if you’re feeling down from all that selfie bingeing on Facebook, the platform can help lift your spirits back up: just have a quick five-minute viewing of your own profile, scientists say. Then you’ll remember they use your phone to listen in on everything you say, and you’ll rightfully be sad again. :(

The full paper, titled “Let me take a selfie: Exploring the psychological effects of posting and viewing selfies and groupies on social media,” has been published in the journal Telematics and Informatics.

Facebook’s new algorithm could help us promote better science

Facebook is a place where information easily gets distorted. Exaggerated, out-of-context or even downright false stories abound on the social platform, but that may change soon. Facebook is rolling out a new “anti-clickbait” algorithm which may solve at least some of those issues.

“People have told us they like seeing authentic stories the most. That’s why we work hard to understand what type of stories and posts people consider genuine, so we can show more of them in News Feed,” a new announcement said.

Clickbait is a pejorative term for attractive content that lacks value. You might see a Facebook post with an intriguing title, something that just makes you want to read it, but when you open the article, there’s basically nothing there. At least, nothing of value.

“We’ve heard from people that they specifically want to see fewer stories with clickbait headlines or link titles. These are headlines that intentionally leave out crucial information, or mislead people, forcing people to click to find out the answer. For example: “When She Looked Under Her Couch Cushions And Saw THIS… I Was SHOCKED!”; “He Put Garlic In His Shoes Before Going To Bed And What Happens Next Is Hard To Believe”; or “The Dog Barked At The Deliveryman And His Reaction Was Priceless.””

via TechCrunch.

Facebook manually assessed the “clickbaitiness” of tens of thousands of articles, scoring each one on how authentic or spammy it reads to people. Better yet, pages that often publish clickbait get a negative score for the entire page. The negative score applies both to the website and to the Facebook page, so spammers can’t simply create a new Facebook page for the same site.
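Facebook hasn’t released the details, but the page-level penalty it describes can be pictured as a simple aggregation: per-article clickbait scores averaged per publisher, keyed to both the Facebook page and its linked domain. Here’s a rough Python sketch, with made-up scores, field names, and cutoff:

```python
# Illustrative sketch of the page-level demotion: per-article clickbait
# scores are aggregated per publisher, and the penalty is keyed to both
# the Facebook page and the linked website, so spinning up a fresh page
# for the same domain doesn't reset it. Scores, cutoff, and field names
# are assumptions, not Facebook's real values.

from collections import defaultdict

# clickbait_score: 0.0 (authentic) .. 1.0 (spammy), in the spirit of
# the human-labeled article set described above.
articles = [
    {"page": "DailyBuzzz", "domain": "dailybuzzz.example", "clickbait_score": 0.9},
    {"page": "DailyBuzzz", "domain": "dailybuzzz.example", "clickbait_score": 0.8},
    {"page": "ScienceNews", "domain": "sciencenews.example", "clickbait_score": 0.1},
]

scores = defaultdict(list)
for article in articles:
    # Record the score against page AND domain so both carry the penalty.
    scores[("page", article["page"])].append(article["clickbait_score"])
    scores[("domain", article["domain"])].append(article["clickbait_score"])

for entity, values in scores.items():
    average = sum(values) / len(values)
    if average > 0.5:  # hypothetical cutoff for demotion
        print(f"Demote posts from {entity}: average clickbait score {average:.2f}")
```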

Facebook hasn’t made the entire system public, but the gist is clear: you’ll likely see less bad science in your feed.

How this affects us

It doesn’t, really. We do our best to make science attractive while staying well inside the boundaries of accuracy. Sometimes we screw up because, alas, to err is human, but I’m happy to say that you, our readers, are quick to point out the mistakes – and we are quick to fix them.

So (shameless pitch here) head over to our Facebook page and send us your feedback. We’d love to hear from you!

NASA calls out climate change deniers on Facebook

It’s glorious and depressing at the same time: NASA used its official Facebook account to shut down one user who was misrepresenting climate science:

It’s climate change denial 101: you take some random fact, gobble it up without even thinking about it, add in some buzzwords to make it look more scientific, and spit it out as loudly as possible. But this guy went a bit further: he went for the old “NASA said this” trick, which, of course, is simply not true. Not only has NASA never asserted that fossil fuels are cooling the atmosphere, the agency is very much in agreement with the rest of the scientists (read: everybody) who understand that climate change is driven by humans.

But don’t take my word for it, check out just a few articles published on the NASA website:

  • Coal and Gas are Far More Harmful than Nuclear Power reads: “Human-caused climate change and air pollution remain major global-scale problems and are both due mostly to fossil fuel burning.”
  • A blanket around the Earth quotes results from the IPCC: “In its Fourth Assessment Report, the Intergovernmental Panel on Climate Change, a group of 1,300 independent scientific experts from countries all over the world under the auspices of the United Nations, concluded there’s a more than 90 percent probability that human activities over the past 50 years have warmed our planet.”
  • Discussing the virtual consensus regarding human-driven climate change, this NASA article reads: “Multiple studies published in peer-reviewed scientific journals show that 97 percent or more of actively publishing climate scientists agree: Climate-warming trends over the past century are extremely likely due to human activities. In addition, most of the leading scientific organizations worldwide have issued public statements endorsing this position.”

They’ve made it abundantly clear several times, yet people still misrepresent them and spread false information. I’m glad someone at NASA wrote such a firm reply, but I’m quite sad they had to do it. I wish people would understand the reality of man-made climate change, even though it’s difficult to swallow.

NASA then went on in the comment section to put another user in their place, one who had claimed that the space agency “fudges numbers”:

For the sake of clarification, these views likely stem from a previous study which found that some aerosols released by burning fossil fuels can temporarily cool localized areas by reflecting more radiation. The overall effect though is still overwhelmingly warming.


Billionaire Sean Parker donates $250 million to accelerate breakthrough cancer immunotherapies

You might know him as the slick Silicon Valley investor who helped Zuckerberg grow the most important social network on the planet, but did you know Sean Parker is actually one of the most generous philanthropists in the tech space? His most recent effort involves $250 million of his own money and an unprecedented collaboration between six leading cancer centers. The Parker Institute for Cancer Immunotherapy will focus on the latest breakthroughs in immunotherapy against cancer. The end goal is to beat cancer, once and for all.


Sean Parker. Credit: Wikimedia Commons

“About half of all cancers, if you catch them early enough are readily treatable with chemotherapy, radiation, and surgery. The other 50% are likely going to kill you,” Parker told Fortune in advance of the announcement. “Immunotherapy is the first breakthrough in recent memory that doesn’t just offer some incremental three to six months average life extension. It offers the possibility to beat cancer.”

The 36-year-old made a multi-billion-dollar fortune by launching and investing in various startups, including Napster (remember that?), Facebook (Parker was its first president), and Spotify, among others. He is still active in the tech space, but also in philanthropy, having given over $600 million since 2005 through the Parker Foundation.

Most of the money goes to the life sciences, with a focus on cancer research. The newly founded Parker Institute for Cancer Immunotherapy will bring together leading scientists from the Memorial Sloan Kettering Cancer Center, Stanford Medicine, the University of California, Los Angeles, the University of California, San Francisco, the University of Texas MD Anderson Cancer Center, and the University of Pennsylvania.

Immunotherapy is a big deal because cancer cells are very good at dodging the immune system’s defenses. These therapies improve, target, or restore immune system function, thereby stopping or slowing the growth of cancer cells and even helping the immune system destroy them. Basically, the Parker Institute for Cancer Immunotherapy will fund the “high risk best ideas that may not get funded by the government,” says Jeffrey Bluestone, a prominent immunologist and former University of California, San Francisco official who now heads the institute.

Infographic by Asian Scientist.

Parker’s beef with current research is that, while survival rates have gone up, the gains have been painfully slow. The five-year survival rate for lung cancer, for instance, has gone up from just over 13% to about 17% since 1995. Now, with ample funding and a dedicated team focused on sharing results and collaborating across the involved institutions, the hope is that scientists will be free to make some real breakthroughs.

“My belief, my sincere belief, is that this is very early days for cancer immunotherapy, and that most of the breakthroughs are still to come,” said Parker. “We have a proof of concept that this works in certain cancers, and now the hard work of expanding immunotherapy to many cancers begins.”

Hats off to Mr. Parker. We need more generous people like him or Tej Kohli, who can funnel some of their vast wealth toward making the world a better place.


Facebook turns six degrees of separation into 3.57


The idea of six degrees of separation was introduced more than 80 years ago. It suggests that you are at most six introductions away from meeting anyone in the world; in other words, everyone on the planet is connected through a chain of six links or fewer. For some, fewer introductions are required to come into direct contact with Barack Obama or Stephen Hawking. A study conducted at Facebook suggests that, among its users at least, there are now only 3.57 degrees of separation on average.

The theory was first proposed in 1929 by the Hungarian writer Frigyes Karinthy in a short story called “Chains.” The idea became very popular, and in time mathematicians jumped on the bandwagon to prove or disprove it. For decades they were unsuccessful. Then, in 1967, American sociologist Stanley Milgram reframed it as “the small-world problem.” Participants selected at random from the Midwest were asked to send a package to a stranger in Massachusetts. The senders knew the recipient’s name, occupation, and general location, but not the address. They were instructed to send the package to a person they knew on a first-name basis who they thought was most likely, out of all their friends, to know the target personally. That person would do the same, and so on, until the package was personally delivered to its target recipient.

Initially, everyone thought the package would have to change hands around 100 times, but it took only five to seven intermediaries for the package to reach its intended recipient. The findings were published and further popularized the concept, giving rise to the phrase “six degrees of separation,” which later became the title of a play and a film. Even Hollywood has its own version, Six Degrees of Kevin Bacon: a trivia game that challenges players to find the shortest path between actor Kevin Bacon and any other actor through his or her film roles.


Image: FB

In 2001, Duncan Watts, a professor at Columbia University, recreated Milgram’s experiment on the internet, with an e-mail standing in for the package. Across 48,000 senders and 19 targets in 157 countries, the average number of intermediaries required was indeed six.

Social networks are definitely an upgrade and provide an even more refined look. According to Facebook, only 3.57 intermediaries are required to connect any of its 1.6 billion users with one another. As the world becomes increasingly connected, this separation should keep shrinking: in 2008, the number was 4.28.

The number of connections depends on geography and user density. In the U.S., there are on average 3.47 degrees of separation. It also depends on the person: Mark Zuckerberg, the founder of Facebook, has 3.17 degrees. You can check your own degrees of separation here.
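For the curious, “average degrees of separation” is simply the mean shortest-path length over all pairs of users in the friendship graph. On a toy graph you can compute it exactly with breadth-first search, as in the hypothetical Python sketch below; at 1.6 billion users, Facebook had to rely on statistical estimation rather than brute force:

```python
# Toy illustration of "average degrees of separation": the mean
# shortest-path length between all pairs of users. The five-person
# network below is entirely made up.

from collections import deque

friends = {
    "ana":  {"bob", "cara"},
    "bob":  {"ana", "dan"},
    "cara": {"ana", "dan"},
    "dan":  {"bob", "cara", "eve"},
    "eve":  {"dan"},
}

def distances_from(start):
    """Breadth-first search: hop count from `start` to every reachable user."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        user = queue.popleft()
        for friend in friends[user]:
            if friend not in dist:
                dist[friend] = dist[user] + 1
                queue.append(friend)
    return dist

# Collect the shortest-path length for every ordered pair of distinct users.
pair_lengths = [
    d
    for user in friends
    for other, d in distances_from(user).items()
    if other != user
]

print(sum(pair_lengths) / len(pair_lengths))  # 1.6 for this toy network
```

The printed value for this made-up five-person network is 1.6; Facebook’s 3.57 is the same quantity estimated over its full graph.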



Facebook is squeezing human emotions into five emoticons


A lot of people have complained that a “like” isn’t enough to express themselves on Facebook. The Menlo Park social giant has been working for some time on a new feature to address just this: soon, users will be able to choose from five more internationally recognized emotions to sit alongside the infamous “like.” But won’t this do more harm than good by generating clutter? Is messing with what looks like a perfect recipe wise?

Facebook seems to have this figured out; after all, it’s in the company’s best interest. How come? Well, aside from improving the user experience, adding emotions means more money in the bank for Facebook. Its business is advertising, and nowadays advertising is all about reading users’ behaviour. By adding more depth to the “like,” Facebook is essentially enriching its knowledge base: it will be more aware of what makes you “angry” or “sad,” what makes you go “wow,” laugh (“haha”), or “love” something. Apparently, “yay” was dropped because it wasn’t universally understood. Notice there’s no word on “dislike.” Though considered, “dislike” was rejected on the grounds that it fosters negativity, and Facebook has to be a happy, happy wonderland!


Image: Mashable

To confine human emotions to these basic social-network expressions, Facebook hired sociologists and conducted extensive tests. You might even have seen the feature already, since it was being tested with a limited group of users. For everyone else, the reactions will be introduced “in the next few weeks.” To use one of the new reactions, you’ll have to hold your thumb on the ‘Like’ button to scroll through and select the one that sums up your mood.

For more on how Facebook is conducting psychological experiments and treating billions like guinea pigs, read ZME Science posts here and here.