LinkedIn ran undisclosed social experiments on more than 20 million users over five years, testing how much weak associations, or acquaintances, improve an individual’s job mobility.
The algorithmic experiments, which ran between 2015 and 2019, randomly varied the contacts suggested to users; over that period, “2 billion new ties and 600,000 new jobs were created.” The study was co-authored by researchers at LinkedIn, the Massachusetts Institute of Technology, Stanford University, and Harvard Business School.
After users received the connection recommendations, the researchers analyzed the new jobs that resulted from the new connections. The study tested a social-scientific theory holding that weak connections, such as “friends of friends,” are more instrumental to landing better job opportunities than strong connections.
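To make the design concrete, here is a minimal, hypothetical sketch of this kind of randomized recommender experiment: users are deterministically bucketed into arms that vary how heavily the connection suggestions favor weak ties, and job transitions are later compared across arms. The arm names, bucketing scheme, and data shapes are illustrative assumptions, not LinkedIn’s actual implementation.

import hashlib
from collections import defaultdict

# Hypothetical arms varying how strongly "People You May Know"
# suggestions favor weak ties (labels are illustrative only).
ARMS = ["more_weak_ties", "baseline", "more_strong_ties"]

def assign_arm(user_id: int) -> str:
    """Deterministically bucket a user into one experiment arm."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

def job_mobility_by_arm(transitions: dict[int, int]) -> dict[str, float]:
    """Average new jobs per user in each arm; `transitions` maps
    user_id -> new jobs observed during the experiment window."""
    totals: dict[str, int] = defaultdict(int)
    counts: dict[str, int] = defaultdict(int)
    for user_id, new_jobs in transitions.items():
        arm = assign_arm(user_id)
        totals[arm] += new_jobs
        counts[arm] += 1
    return {arm: totals[arm] / counts[arm] for arm in totals}

print(job_mobility_by_arm({101: 1, 102: 0, 103: 2, 104: 0, 105: 1}))

Hashing the user ID keeps each person’s assignment stable across sessions, which is what makes the downstream job-mobility comparison a valid between-group contrast.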
The theory, formulated by Stanford professor Mark Granovetter and known as the “strength of weak ties,” was confirmed by the LinkedIn study, but the authors also proposed some revisions.
Suggested Revisions
First, although there was an initial positive impact on job mobility, it did not persist; the researchers found that “there were diminishing marginal returns to tie weakness.” Second, the effects varied with interaction intensity and the number of mutual connections, as “moderately weak ties (measured by mutual connections) and the weakest ties (measured by interaction intensity) created the most job mobility.”
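The two measures the authors quote can be made concrete with a small sketch: structural tie strength counted as mutual connections, and behavioral tie strength counted as direct interactions between two users. The graph and message structures below are illustrative assumptions, not the study’s actual data model.

# Structural tie strength: how many contacts a and b share.
def mutual_connections(graph: dict[str, set[str]], a: str, b: str) -> int:
    return len(graph.get(a, set()) & graph.get(b, set()))

# Behavioral tie strength: count of direct exchanges between a and b.
def interaction_intensity(messages: list[tuple[str, str]], a: str, b: str) -> int:
    pair = {a, b}
    return sum(1 for sender, receiver in messages if {sender, receiver} == pair)

graph = {"ana": {"bo", "cy", "di"}, "bo": {"ana", "cy"}, "cy": {"ana", "bo"}}
msgs = [("ana", "bo"), ("bo", "ana"), ("ana", "cy")]
print(mutual_connections(graph, "ana", "bo"))    # 1 (only "cy" is shared)
print(interaction_intensity(msgs, "ana", "bo"))  # 2

Under these definitions, a tie can be “moderately weak” on one axis (few mutual connections) while being the “weakest” on the other (rare interaction), which is why the study reports the two measures separately.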
Ethics of Social Experiments
Although LinkedIn’s privacy policy states that the company reserves the right to use personal data “available to us to research social, economic, and workplace trends” and “conduct research,” privacy advocates have objected to the use of people’s data without their consent.

Sinan Aral, a management and data science professor at MIT and lead author of the study, told USA Today that researchers “received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards.”
“The study was vetted and approved by the MIT Committee on the Use of Human Subjects in research, and these types of algorithm experiments, in addition to helping platforms improve, are also standard across the industry,” Aral said.