SNS Review #2: FACEBOOK and Flickr's PURGE of Different Voices

Updated: Jun 6

#Flickr #Facebook #Complaints

FREEDOM OF EXPRESSION MUST BE RESPECTED
Image: Copyrighted by Ryota Nakanishi


Important


Image: I love The First Amendment of US Constitution

The First Amendment to the US Constitution clearly states: ''Amendment I

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.'' (1)

URL: https://www.law.cornell.edu/constitution/first_amendment


However, US tech giants have recently been purging different voices more harshly than ever. As a result, complaints from their social network users are increasing around the world.


Some US tech giants run local-language sites in countries or regions where they do not even have physical offices. Their internet operations affect people's lives across borders. This is why cross-border regulation of these tech giants, such as increased corporate taxation, is necessary.


Over Facebook's violations, the governments of Germany and the US bravely began punishing the giant in 2019.


FACTS


1. Germany's Federal Office of Justice (BfJ) on Tuesday fined US social media giant Facebook €2 million ($2.3 million) for underreporting the number of complaints it had received about illegal content on its platform.


The fine was levied for infractions against Germany's internet transparency law known as NetzDG.

The law requires companies like Facebook and Twitter to remove posts that contain hate speech or incite violence within 24 hours or face fines as high as €50 million. Social media companies are also required to file reports on their progress every six months.

Read more: Germany implements new internet hate speech crackdown


Skewing the facts


Justice Minister Christine Lambrecht accused the company of skewing the extent of such violations by selectively reporting complaints.


Lambrecht said it was exceedingly difficult for a user to complain to Facebook about posts that violate NetzDG. She added that by comparison, it was far easier to complain about posts that violate the site's softer community standards.


But Lambrecht said that Facebook's decision to only tally the NetzDG complaints meant the total number was artificially low.


That selective reporting led Facebook to cite a total of 1,704 complaints from January 1, 2018 to June 30, 2018. That number was significantly lower than those that YouTube (215,000) and Twitter (265,000) reported for the full year.


The Office of Justice addressed that problem, saying, "The BfJ assumes that the number of complaints received via the widely known flagging channel is considerable and that what is presented in the published report is therefore incomplete."


Read more: Facebook slams proposed German 'anti-hate speech' social media law


Strict laws governing online speech


NetzDG, which went into effect on January 1, 2018, is among the strictest in the world governing privacy and online speech. It outlaws anti-Semitic speech as well as hateful or inciting speech directed at persons or groups on the grounds of religion or ethnicity.

Facebook has desperately attempted to clean up its image regarding transparency in the wake of a number of scandals involving elections in the US, UK, and the Philippines. The company has said it will review Tuesday's decision and may appeal it.

Facebook said: "We want to remove hate speech as quickly and effectively as possible and work to do so. We are confident our published NetzDG reports are in accordance with the law, but as many critics have pointed out, the law lacks clarity."


— js/amp (AFP, AP, Reuters) (2)

URL:

https://www.dw.com/en/germany-fines-facebook-for-underreporting-hate-speech-complaints/a-49447820


2. The Federal Trade Commission announced on Wednesday that Facebook agreed to pay a record-breaking $5 billion fine over privacy violations. The penalty follows an investigation into the Cambridge Analytica data-sharing scandal.


“The $5 billion penalty against Facebook is the largest ever imposed on any company for violating consumers’ privacy and almost 20 times greater than the largest privacy or data security penalty ever imposed worldwide,” the FTC said in a press release. It added that “it is one of the largest penalties ever assessed by the US government for any violation.”


According to FTC Chairman Joe Simons, “Despite repeated promises to its billions of users worldwide that they could control how their personal information is shared, Facebook undermined consumers’ choices.”


Facebook has also agreed to pay an additional $100 million to settle allegations that it misled investors about the seriousness of the misuse of users’ data, the Securities and Exchange Commission (SEC) said.


The FTC first started probing Facebook in March 2018 after reports that the social network allowed as many as 87 million users’ data to fall into the possession of political consulting firm Cambridge Analytica.


Allowing access to the data was a violation of a 2012 FTC consent decree.


Despite the fact that the $5 billion fine is the biggest in FTC history, far bigger than the $22 million fine levied against Google in 2012, critics have argued that the penalty is too low. The fine has been dubbed an “embarrassing joke,” a “parking ticket,” and a “sweetheart deal.”


The ‘largest’ FTC fine in US history represents basically a month of Facebook’s revenue. The company had $15 billion in revenue last quarter and $22 billion in profit in 2018.

US Senator Richard Blumenthal, a Democrat from Connecticut, called the settlement “a fig leaf” that offers “no accountability for top executives.”


“By relying on a monetary fine to deter Facebook, the FTC has failed to heed history’s lessons. Facebook has already written this penalty down as a one-time-cost in return for the extraordinary profits reaped from a decade of data misuse,” Blumenthal was quoted as saying by Reuters.

There was also criticism leveled at the FTC for not holding Facebook founder and CEO Mark Zuckerberg personally responsible. Democratic FTC Commissioner Rebecca Slaughter said the trade commission should have taken Zuckerberg to court.


— RT (3)

URL:

https://www.rt.com/business/464958-facebook-record-fine-privacy/


Comment


Facebook's Violations of Law and of Its Users' Human Rights Must Be Punished.
Image: Copyrighted by Ryota Nakanishi

Facebook's notoriously biased censorship of so-called hate speech and its violations of users' privacy must be punished by governments determined to protect their own people and users in other countries. Users lose nothing when such governments take necessary action against uncontrollable social network giants like this one.


Flickr


Flickr provided its users' uploaded photos for facial recognition surveillance without their permission or any communication.
Image: Public Domain

Flickr is now part of a US company and presents itself as a photo-sharing social network, yet its users complain of purges similar to those on other SNS platforms like Facebook. Moreover, Flickr is a potential violator of its users' privacy: uploaded photos were provided for facial recognition surveillance without the users' permission or knowledge. Like other SNS companies from the US and its allies, it is highly biased on political issues, including Hong Kong's extradition bill. Coverage of Flickr's scandals is still limited, but real news reports and many user complaints can be found online.


Facts


1. Facial recognition can log you into your iPhone, track criminals through crowds and identify loyal customers in stores.


The technology — which is imperfect but improving rapidly — is based on algorithms that learn how to recognize human faces and the hundreds of ways in which each one is unique.


To do this well, the algorithms must be fed hundreds of thousands of images of a diverse array of faces. Increasingly, those photos are coming from the internet, where they’re swept up by the millions without the knowledge of the people who posted them, categorized by age, gender, skin tone and dozens of other metrics, and shared with researchers at universities and companies.


As the algorithms get more advanced — meaning they are better able to identify women and people of color, a task they have historically struggled with — legal experts and civil rights advocates are sounding the alarm on researchers’ use of photos of ordinary people. These people’s faces are being used without their consent, in order to power technology that could eventually be used to surveil them.

That’s a particular concern for minorities who could be profiled and targeted, the experts and advocates say.


“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz.

The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.


But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms. (NBC News obtained IBM’s dataset from a source after the company declined to share it, saying it could be used only by academic or corporate research groups.)


“None of the people I photographed had any idea their images were being used in this way,” said Greg Peverill-Conti, a Boston-based public relations executive who has more than 700 photos in IBM’s collection, known as a “training dataset.”


“It seems a little sketchy that IBM can use these pictures without saying anything to anybody,” he said.


John Smith, who oversees AI research at IBM, said that the company was committed to “protecting the privacy of individuals” and “will work with anyone who requests a URL to be removed from the dataset.”


Despite IBM’s assurances that Flickr users can opt out of the database, NBC News discovered that it’s almost impossible to get photos removed. IBM requires photographers to email links to photos they want removed, but the company has not publicly shared the list of Flickr users and photos included in the dataset, so there is no easy way of finding out whose photos are included. IBM did not respond to questions about this process.


To see if your Flickr photos are part of the dataset, you can enter your username in an interactive tool NBC News created based on the IBM dataset, available in the original article.
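The idea behind such a lookup tool can be sketched in a few lines. This is a minimal illustration only: it assumes a hypothetical CSV index with a `username` column, since the real IBM dataset index is not public.

```python
import csv

def username_in_dataset(username, index_path):
    """Return True if `username` appears in a CSV index of the dataset.

    Assumes one row per photo with a 'username' column -- the actual
    IBM dataset format is not public, so this file layout is hypothetical.
    """
    with open(index_path, newline="", encoding="utf-8") as f:
        return any(row["username"] == username for row in csv.DictReader(f))
```

In practice, of course, such a check only works if someone with access to the dataset publishes an index, which is exactly what IBM declined to do.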


IBM says that its dataset is designed to help academic researchers make facial recognition technology fairer. The company is not alone in using publicly available photos on the internet in this way. Dozens of other research organizations have collected photos for training facial recognition systems, and many of the larger, more recent collections have been scraped from the web.


Some experts and activists argue that this is not just an infringement on the privacy of the millions of people whose images have been swept up — it also raises broader concerns about the improvement of facial recognition technology, and the fear that it will be used by law enforcement agencies to disproportionately target minorities.


“People gave their consent to sharing their photos in a different internet ecosystem,” said Meredith Whittaker, co-director of the AI Now Institute, which studies the social implications of artificial intelligence. “Now they are being unwillingly or unknowingly cast in the training of systems that could potentially be used in oppressive ways against their communities.”


How facial recognition has evolved


In the early days of building facial recognition tools, researchers paid people to come to their labs, sign consent forms and have their photo taken in different poses and lighting conditions. Because this was expensive and time consuming, early datasets were limited to a few hundred subjects.

With the rise of the web during the 2000s, researchers suddenly had access to millions of photos of people.


“They would go into a search engine, type in the name of a famous person and download all of the images,” said P. Jonathon Phillips, who collects datasets for measuring the performance of face recognition algorithms for the National Institute of Standards and Technology. “At the start these tended to be famous people, celebrities, actors and sports people.”

As social media and user-generated content took over, photos of regular people were increasingly available. Researchers treated this as a free-for-all, scraping faces from YouTube videos, Facebook, Google Images, Wikipedia and mugshot databases.


Academics often appeal to the noncommercial nature of their work to bypass questions of copyright. Flickr became an appealing resource for facial recognition researchers because many users published their images under “Creative Commons” licenses, which means that others can reuse their pictures without paying license fees. Some of these licenses allow commercial use.


To build its Diversity in Faces dataset, IBM says it drew upon a collection of 100 million images published with Creative Commons licenses that Flickr’s owner, Yahoo, released as a batch for researchers to download in 2014. IBM narrowed that dataset down to about 1 million photos of faces that have each been annotated, using automated coding and human estimates, with almost 200 values for details such as measurements of facial features, pose, skin tone and estimated age and gender, according to the dataset obtained by NBC News.
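A record carrying "almost 200 values" per face might look roughly like the sketch below. The field names here are illustrative, not IBM's actual schema, and the license check reflects only the general Creative Commons convention that an "NC" (NonCommercial) clause forbids commercial use.

```python
from dataclasses import dataclass, field

@dataclass
class FaceAnnotation:
    # Illustrative subset of the ~200 values described in the article;
    # the real Diversity in Faces schema is not reproduced here.
    photo_url: str
    license: str                  # Creative Commons variant, e.g. "CC BY-NC"
    pose: tuple                   # e.g. (yaw, pitch, roll) estimates
    skin_tone: float              # numeric estimate
    estimated_age: int
    estimated_gender: str
    facial_measurements: dict = field(default_factory=dict)

def commercial_use_allowed(rec: FaceAnnotation) -> bool:
    """CC licenses with an 'NC' clause do not permit commercial use."""
    return "nc" not in rec.license.lower()
```

The distinction the license check illustrates matters here: only some Creative Commons licenses allow commercial reuse, which is why scraping all CC-licensed Flickr photos indiscriminately raises the legal questions the article describes.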


It’s a single case study in a sea of datasets taken from the web. According to Google Scholar, hundreds of academic papers have been written on the back of these huge collections of photos — which have names like MegaFace, CelebFaces and Faces in the Wild — contributing to major leaps in the accuracy of facial recognition and analysis tools. It was difficult to find academics who would speak on the record about the origins of their training datasets; many have advanced their research using collections of images scraped from the web without explicit licensing or informed consent.

The researchers who built those datasets did not respond to requests for comment.


How IBM is using the face database


IBM released its collection of annotated images to other researchers so that it can be used to develop “fairer” facial recognition systems. That means systems can more accurately identify people of all races, ages and genders.


“For the facial recognition systems to perform as desired, and the outcomes to become increasingly accurate, training data must be diverse and offer a breadth of coverage,” said IBM’s John Smith, in a blog post announcing the release of the data.


The dataset does not link the photos of people’s faces to their names, which means any system trained to use the photos would not be able to identify named individuals. But civil liberty advocates and tech ethics researchers have still questioned the motives of IBM, which has a history of selling surveillance tools that have been criticized for infringing on civil liberties.


For example, in the wake of the 9/11 attacks, the company sold technology to the New York City police department that allowed it to search CCTV feeds for people with particular skin tones or hair color. IBM has also released an “intelligent video analytics” product that uses body camera surveillance to detect people by “ethnicity” tags, such as Asian, black or white.


IBM said in an email that the systems are “not inherently discriminatory,” but added: “We believe that both the developers of these systems and the organizations deploying them have a responsibility to work actively to mitigate bias. It’s the only way to ensure that AI systems will earn the trust of their users and the public. IBM fully accepts this responsibility and would not participate in work involving racial profiling.”


Today, the company sells a system called IBM Watson Visual Recognition, which IBM says can estimate the age and gender of people depicted in images and, with the right training data, can be used by clients to identify specific people from photos or videos.