Facebook now allows users to add songs to photos, videos

San Francisco, 25 October : Offering its over two billion monthly users new ways to express themselves, Facebook has introduced new music features, including an option to add a song to photos and videos they share to Facebook Stories.

“And we’re bringing it to News Feed, too!” Facebook said in a statement on Wednesday, adding that users would soon be able to add songs to their Profile as well.

Adding a song to a photo or video on Facebook works in the same way the feature functions on Instagram. Just take a photo or video, tap on the sticker icon and select the music sticker. Once you find the song of your choice, you can pick the perfect part to share and add the sticker with the artist and song name. Users can move the sticker around and add other stickers and effects to customise their story. Facebook said it was also rolling out “Lip Sync Live”, a feature Facebook introduced in June to let users lip sync to songs, to all profiles in many countries around the world.

“We are also opening up the feature to more artists and creators by expanding to Pages, giving them more ways to connect with their fans,” said Facebook’s Fred Beteille, Head of Product, Music and Rights, and Tamara Hrivnak, Head of Music Business Development and Partnerships.

Facebook says it didn’t allow sharing users’ data with third parties

San Francisco, Dec 20 : Facebook has reiterated that it never allowed its partners like Netflix or Spotify to access users’ private messages without their permission.

In a new blog post, Facebook Vice President of Product Partnerships Ime Archibong said late on Wednesday that the social networking giant worked closely with four partners to integrate messaging capabilities into their products so that people could message their Facebook friends — but only if they chose to use Facebook Login.

“These experiences are common in our industry — think of being able to have Alexa read your email aloud or to read your email on Apple’s Mail app,” said Archibong.

The second round of rebuttal came after a New York Times report claimed that Facebook allowed large technology companies and popular apps like Netflix or Spotify access to its users’ personal information.

“People could message their friends about what they were listening to on Spotify or watching on Netflix, share folders on Dropbox, or get receipts from money transfers through the Royal Bank of Canada app.

“These experiences were publicly discussed. And they were clear to users and only available when people logged into these services with Facebook. However, they were experimental and have now been shut down for nearly three years,” said Archibong.

In a statement given to IANS on Thursday, Netflix said that over the years it has tried various ways to make the platform more social.

“One example of this was a feature we launched in 2014 that enabled members to recommend TV shows and movies to their Facebook friends via Messenger or Netflix.

“It was never that popular so we shut the feature down in 2015. At no time did we access people’s private messages on Facebook or ask for the ability to do so,” said a Netflix spokesperson.

According to Facebook, it worked with partners to build messaging integrations into their apps so people could send messages to their Facebook friends.

“No third party was reading your private messages or writing messages to your friends without your permission. Many news stories imply we were shipping over private messages to partners, which is not correct,” stressed Archibong.

According to Facebook, these partnerships were agreed via extensive negotiations and documentation, detailing how the third party would use the API, and what data they could and couldn’t access.

Earlier, reacting to the New York Times report, Facebook had said it did not give large tech companies access to people’s data without their permission as its integration partners “had to get authorization from people”.

According to the company, “none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC (Federal Trade Commission).

“Our integration partners had to get authorization from people. You would have had to sign in with your Facebook account to use the integration offered by Apple, Amazon or another integration partner,” said Konstantinos Papamiltiadis, Director of Developer Platforms and Programmes.

Facebook begins verifying political ads in India ahead of 2019 polls

New Delhi, 07 December : Facing intense scrutiny over the misuse of its platform globally during elections, Facebook has announced fresh steps to increase ad transparency and defend against foreign interference ahead of the 2019 Lok Sabha polls in India.

Now anyone who wants to run an ad in India related to politics will need to first confirm their identity and location, and give more details about who placed the ad, the social networking giant said in a statement late Thursday.

“We’re making big changes to the way we manage these ads on Facebook and Instagram. We’ve rolled out these changes in the US, Brazil and the UK, and next, we’re taking our first steps towards bringing transparency to ads related to politics in India,” said Sarah Clark Schiff, Product Manager at Facebook.

“This is key as we work hard to prevent abuse on Facebook ahead of India’s general elections next year.”

Facebook said the identity and location confirmation will take a few weeks, so those planning to run political ads next year should start the verification process now by using their mobile phone or computer to submit proof of identity and location.

“This will help avoid delays when they run political ads next year,” informed Schiff.

Advertisers in India can download the latest Facebook app and visit Settings to get started.

In early 2019, Facebook would also start to show a disclaimer on all political ads providing more information about who placed the ad, along with an online searchable Ad Library that anyone can access.

“This is a library of all ads related to politics from a particular advertiser as well as information like the budget associated with an individual ad, a range of impressions, as well as the demographics of who saw the ad,” said Facebook.

At that time, the company would also begin to enforce the policy that requires all ads related to politics to be run by an advertiser who has completed the authorisation process and to be labelled with the disclaimer.

“We will not require eligible news publishers to get authorised, and we won’t include their ads in the Ad Library,” Facebook added.

Visiting India a couple of months ago, Richard Allan, Facebook’s Vice President for Global Policy Solutions, said that the social networking giant was in the process of establishing a task force comprising “hundreds of people” in the country to prevent bad actors from abusing its platform.

“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” he said.

Facebook has been under intense scrutiny ever since allegations of Russia-linked accounts using the social networking platform to spread divisive messages during the 2016 presidential election surfaced.

Echoing Facebook CEO Mark Zuckerberg’s earlier comments on elections across the world, Allan said the social media platform “wants to help countries around the world, including India, to conduct free and fair elections”.

In April, Zuckerberg said Facebook will ensure that its platform is not misused to influence elections in India and elsewhere.

“Our goals are to understand Facebook’s impact on upcoming elections — like Brazil, India, Mexico and the US midterms — and to inform our future product and policy decisions,” he told the US lawmakers during a hearing.

WhatsApp selects 20 teams to curb fake news globally, including India


New Delhi, 13 November : Facebook-owned WhatsApp on Tuesday announced that it has selected 20 research teams worldwide — including experts from India and those of Indian origin — who will study how misinformation spreads and what additional steps the mobile messaging platform could take to curb fake news.

Shakuntala Banaji from London School of Economics and Political Science (LSE), Anushi Agrawal and Nihal Passanha from Bengaluru-based media and arts collective “Maraa” and Ramnath Bhat from LSE have been selected for the paper titled “WhatsApp Vigilantes? WhatsApp messages and mob violence in India”. The research examines the ways in which WhatsApp users understand and find solutions to the spate of “WhatsApp lynchings” that has killed over 30 people so far.

The Indian government has also directed WhatsApp to take necessary remedial measures to prevent proliferation of fake and, at times, motivated/sensational messages on its platform. Among others selected were Vineet Kumar from Ranchi-headquartered Cyber Peace Foundation (principal investigator), Amrita Choudhary, President of the Delhi-based non-profit Cyber Café Association of India (CCAOI) and Anand Raje from Cyber Peace Foundation.

They will work as a team on the paper titled “Digital literacy and impact of misinformation on emerging digital societies”.

P.N. Vasanti from the Centre for Media Studies in New Delhi will work with S. Shyam Sundar of The Pennsylvania State University (Principal Investigator) to examine the role of content modality in vulnerability to misinformation, under the topic titled “Seeing is Believing: Is Video Modality More Powerful in Spreading Fake News?”

WhatsApp had issued a call for papers in July this year and received proposals from over 600 research teams around the world.

“Each of the 20 research teams will receive up to $50,000 for their project (for a total of $1 million),” WhatsApp said in a statement.

Lipika Kamra from O.P. Jindal Global University and Philippa Williams from the Queen Mary University of London (Principal Investigator) will examine the role of WhatsApp in everyday political conversations in India, in the context of India’s social media ecosystem.

According to Mrinalini Rao, lead researcher at WhatsApp, the platform cares deeply about the safety of its over 1.5 billion monthly active users globally and over 200 million users in India.

“We appreciate the opportunity to learn from these international experts about how we can continue to help address the impact of misinformation,” Rao said.

“These studies will help us build upon recent changes we have made within WhatsApp and support broad education campaigns to help keep people safe,” she added.

The recipients are from countries including Brazil, India, Indonesia, Israel, Mexico, Netherlands, Nigeria, Singapore, Spain, the UK and US. WhatsApp said it is hosting them in California this week so they can hear from product leaders about how it builds its product.

“Given the nature of private messaging – where 90 per cent of the messages sent are between two people and group sizes are strictly limited – our focus remains on educating and empowering users and proactively tackling abuse,” said the company.

WhatsApp recently implemented a “forwarded” label to inform users when they receive a message that was not originally written by their friend or loved one. To tackle abuse, WhatsApp has also set a limit on how many times a message can be forwarded. In India, WhatsApp has partnered with the Digital Empowerment Foundation to train community leaders in several states on how to address misinformation.

“We are also running ads in several languages — in print, online, and on over 100 radio stations — amounting to the largest public education campaign on misinformation anywhere in the world,” the company noted.

Sayan Banerjee from University of Essex, Srinjoy Bose from University of New South Wales and Robert A. Johns from University of Essex will study “Misinformation in Diverse Societies, Political Behaviour & Good Governance”.

Santosh Vijaykumar from Northumbria University, Arun Nair from Health Systems Research India Initiative and Venkat Chilukuri, Srishti Institute of Art, Design and Technology are part of the team that will study “Misinformation Vulnerabilities among Elderly during Disease Outbreaks”.
