Facebook and Signal are fighting over an ad campaign. Here’s why

Facebook CEO Mark Zuckerberg

Marlene Awaad | Bloomberg | Getty Images

Facebook, the world’s largest social media platform, found itself in a public dispute with communications app Signal this week over an ad campaign.

The encrypted messaging service — a non-profit that rivals Facebook-owned WhatsApp — said in a blog post on Tuesday that Facebook had blocked one of its ad campaigns on Instagram, which Facebook also owns.

The campaign was designed to show Instagram users how much data Instagram and parent company Facebook collect on them.

“We created a multi-variant targeted ad designed to show you the personal data that Facebook collects about you and sells access to,” Signal wrote. “The ad would simply display some of the information collected about the viewer, which the advertising platform uses.”

Signal used Instagram’s own adtech tools to target the ads at users. Here is example text of one of the ads from Signal: “You got this ad because you’re a teacher, but more importantly you’re a Leo (and single). This ad used your location to see you’re in Moscow. You like to support sketch comedy, and this ad thinks you do drag.”
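
Signal’s description amounts to a fill-in-the-blanks template: the targeting categories the ad platform matched for a given viewer are substituted directly into the copy that viewer sees. Below is a minimal sketch of that idea in Python, assuming a handful of illustrative targeting fields (this is not Signal’s actual tooling, nor Instagram’s ad API):

```python
# A minimal sketch, not Signal's actual tooling: fill ad copy with the
# targeting categories that matched the viewer. All field names are invented
# for illustration.
from string import Template

AD_TEMPLATE = Template(
    "You got this ad because you're a $occupation, but more importantly "
    "you're a $zodiac_sign (and $relationship_status). This ad used your "
    "location to see you're in $city. You like to support $interest_one, "
    "and this ad thinks you do $interest_two."
)

def render_ad(targeting: dict) -> str:
    """Substitute whatever targeting categories matched this viewer."""
    return AD_TEMPLATE.substitute(targeting)

if __name__ == "__main__":
    example = {
        "occupation": "teacher",
        "zodiac_sign": "Leo",
        "relationship_status": "single",
        "city": "Moscow",
        "interest_one": "sketch comedy",
        "interest_two": "drag",
    }
    print(render_ad(example))  # reproduces the example ad text above
```

Multiplying out the possible values for each field is what makes the campaign “multi-variant”: every combination of targeting attributes yields its own version of the ad.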

Signal said that Facebook “wasn’t into that idea” and claimed that its ad account had been disabled as a result.

“Being transparent about how ads use people’s data is apparently enough to get banned,” Signal wrote. “In Facebook’s world, the only acceptable usage is to hide what you’re doing from your audience.”

Facebook described the ad campaign as a stunt and claimed that Signal had never actually tried to run the Instagram campaign.

“This is a stunt by Signal, who never even tried to actually run these ads — and we didn’t shut down their ad account for trying to do so,” a Facebook spokesperson told CNBC on Thursday.

“If Signal had tried to run the ads, a couple of them would have been rejected because our advertising policies prohibit ads that assert that you have a specific medical condition or sexual orientation, as Signal should know. But of course, running the ads was never their goal — it was about getting publicity.”

Signal countered on Twitter that it “absolutely did” try to run the ads. “The ads were rejected, and Facebook disabled our ad account. These are real screenshots, as Facebook should know.”

Joe Osborne, a Facebook spokesperson, responded on Twitter on Wednesday saying the screenshots are from early March “when the ad account was briefly disabled for a few days due to an unrelated payments issue.”

Osborne added: “The ads themselves were never rejected as they were never set by Signal to run. The ad account has been available since early March, and the ads that don’t violate our policies could have run since then.”

Signal is funded by Brian Acton, the entrepreneur who sold WhatsApp to Facebook for $22 billion, making himself a billionaire several times over in the process.

Acton left Facebook and WhatsApp in 2017 and later claimed that Facebook was laying the groundwork to show targeted ads and facilitate commercial messaging in WhatsApp.

Following the Cambridge Analytica scandal, Acton tweeted: “It is time. #deletefacebook.”

Venture capitalist Bill Gurley said on Thursday that the Signal vs. Facebook story is “remarkable.”

“The biggest threat to Facebook is a non-profit funded by WhatsApp founders,” he said. “Such a great story. What was Facebook argument for banning these ads? Too much transparency? My favorite prize fight.”

WHO says delta Covid variant has now spread to 80 countries, and it keeps mutating

A mobile Covid-19 vaccination centre outside Bolton Town Hall, Bolton, where case numbers of the Delta variant first identified in India have been relatively high.

Peter Byrne | PA Images | Getty Images

The delta Covid variant first detected in India has now spread to more than 80 countries, and it continues to mutate as it spreads across the globe, World Health Organization officials said Wednesday.

The variant now makes up 10% of all new cases in the United States, up from 6% last week. Studies have shown the variant is even more transmissible than other variants. WHO officials said some reports have found that it also causes more severe symptoms, but more research is needed to confirm those conclusions.

The WHO is also tracking recent reports of a “delta plus” variant. “What I think this means is that there is an additional mutation that has been identified,” said Maria Van Kerkhove, the WHO’s Covid-19 technical lead. “In some of the delta variants we’ve seen one less mutation or one deletion instead of an additional, so we’re looking at all of it.”

The United Kingdom recently saw the delta variant become the dominant strain there, surpassing the alpha variant, which was first detected in the country last fall. The delta variant now makes up more than 60% of new cases in the U.K.

Dr. Anthony Fauci, chief medical advisor to the president, said last week that “we cannot let that happen in the United States,” as he pushed to get more people vaccinated, especially young adults.

The Centers for Disease Control and Prevention designated the delta variant as a variant of concern in the U.S. on Tuesday. The WHO designated the delta variant as a variant of concern in early May.

The WHO on Tuesday also added another Covid variant, lambda, to its list of variants of interest. The agency is monitoring more than 50 different Covid variants, but not all become enough of a public health threat to make the WHO’s formal watchlist. The lambda variant has multiple mutations in the spike protein that could have an impact on its transmissibility, but more studies are needed to fully understand them, Van Kerkhove said.

The lambda variant has been detected by scientists in South America, including in Chile, Peru, Ecuador and Argentina, thanks to increased genomic surveillance.

Facebook scientists say they can tell where deepfakes come from

An example of a deepfake created by CNBC

Kyle Walsh

Artificial intelligence researchers at Facebook and Michigan State University say they have developed a new piece of software that can reveal where so-called deepfakes have come from.

Deepfakes are videos that have been digitally altered in some way with AI. They’ve become increasingly realistic in recent years, making it harder for humans to determine what’s real and what’s not on the internet, including on Facebook itself.

The Facebook researchers claim that their AI software — announced on Wednesday — can be trained to determine whether a piece of media is a deepfake from a still image or a single video frame. Not only that, they say the software can also identify the AI model that was used to create the deepfake in the first place, no matter how novel the technique.

Tal Hassner, an applied research lead at Facebook, told CNBC that it’s possible to train AI software “to look at the photo and tell you with a reasonable degree of accuracy what is the design of the AI model that generated that photo.”

The research comes after MSU researchers found last year that it’s possible to determine which model of camera was used to take a specific photo — Hassner said that Facebook’s work with MSU builds on this.
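
Facebook has not released the detector, but the premise it describes is that a generative model leaves consistent low-level traces, a kind of fingerprint, in everything it produces, much as a camera sensor does. The toy sketch below illustrates that general idea only, using a crude high-pass residual as the fingerprint and cosine similarity against a catalogue of known generators; it is not the Facebook/MSU method, and the generator names are invented:

```python
# Conceptual illustration only (not the Facebook/MSU system): treat the
# high-frequency residual of an image as a "fingerprint" and compare it
# against reference fingerprints of known generative models.
import numpy as np

def fingerprint(image: np.ndarray) -> np.ndarray:
    """Crude fingerprint: image minus a 3x3 box-blurred copy, flattened and normalized."""
    img = image.astype(float)
    blurred = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    residual = img - blurred / 9.0
    vec = residual.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def closest_known_model(image: np.ndarray, catalogue: dict) -> tuple:
    """Return the catalogued generator whose fingerprint best matches the image."""
    f = fingerprint(image)
    scores = {name: float(f @ ref) for name, ref in catalogue.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # stand-in reference fingerprints for two hypothetical generators
    catalogue = {
        "generator_A": fingerprint(rng.random((64, 64))),
        "generator_B": fingerprint(rng.random((64, 64))),
    }
    suspect = rng.random((64, 64))
    print(closest_known_model(suspect, catalogue))
```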

‘Cat and mouse game’

Deepfakes are bad news for Facebook, which is constantly battling to keep fake content off its main platform, as well as Messenger, Instagram and WhatsApp. The company banned deepfakes in January 2020, but it struggles to swiftly remove all of them from its platforms.

Hassner said that detecting deepfakes is a “cat and mouse game,” adding that they’re becoming easier to produce and harder to detect.

One of the main applications of deepfakes so far has been in pornography, where a person’s face is swapped onto someone else’s body, but they’ve also been used to make celebrities appear as though they’re doing or saying something they’re not.

Indeed, a set of hyper-realistic and bizarre Tom Cruise deepfakes on TikTok has now been watched more than 50 million times, with many viewers struggling to tell that they’re not real.

Today, it’s possible for anyone to make their own deepfakes using free apps like FakeApp or Faceswap.

Deepfake expert Nina Schick, who has advised U.S. President Joe Biden and French President Emmanuel Macron, said at the CogX AI conference on Monday that detecting deepfakes isn’t easy.

In a follow-up email, she told CNBC that Facebook and MSU’s work “looks like a pretty big deal in terms of detection” but stressed that it’s important to find out how well deepfake detection models actually work in the wild.

“It’s all well and good testing it on a set of training data in a controlled environment,” she said, adding that “one of the big challenges seems that there are easy ways to fool detection models — i.e. by compressing an image or a video.”

Hassner admitted that it might be possible for a bad actor to get around the detector. “Would it be able to defeat our system? I assume that it would,” he said.

Broadly speaking, there are two types of deepfakes: those that are wholly generated by AI, such as the fake human faces on www.thispersondoesnotexist.com, and those that use elements of AI to manipulate authentic media.

Schick questioned whether Facebook’s tool would work on the latter, adding that “there can never be a one size fits all detector.” But Xiaoming Liu, Facebook’s collaborator at Michigan State, said the work has “been evaluated and validated on both cases of deepfakes.” Liu added that the “performance might be lower” in cases where the manipulation only happens in a very small area.

Chris Ume, the synthetic media artist behind the Tom Cruise deepfakes, said at CogX on Monday that deepfake technology is moving rapidly.

“There are a lot of different AI tools and for the Tom Cruise, for example, I’m combining a lot of different tools to get the quality that you see on my channel,” he said.

It’s unclear how, or indeed if, Facebook will look to apply Hassner’s software to its platforms. “We’re not at the point of even having a discussion on products,” said Hassner, adding that there are several potential use cases, including spotting coordinated deepfake attacks.

“If someone wanted to abuse them (generative models) and conduct a coordinated attack by uploading things from different sources, we can actually spot that just by saying all of these came from the same mold we’ve never seen before but it has these specific properties, specific attributes,” he said.
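
That coordinated-attack scenario can be pictured as a clustering step: if several uploads carry near-identical fingerprints that match no catalogued generator, they plausibly came from the same unknown model. A hedged sketch of that grouping step, with the similarity threshold and data invented for illustration:

```python
# Illustrative grouping step, not Facebook's production system: uploads whose
# fingerprint vectors are highly similar are assumed to share one generator.
import numpy as np

def group_by_similarity(fingerprints: list, threshold: float = 0.9) -> list:
    """Greedily group indices of fingerprints whose cosine similarity to a
    group's first member exceeds the threshold."""
    groups = []
    for i, f in enumerate(fingerprints):
        for group in groups:
            if float(f @ fingerprints[group[0]]) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.standard_normal(256)
    base /= np.linalg.norm(base)
    # three uploads "from the same mold", plus one unrelated upload
    uploads = [base + 0.01 * rng.standard_normal(256) for _ in range(3)]
    uploads.append(rng.standard_normal(256))
    uploads = [u / np.linalg.norm(u) for u in uploads]
    print(group_by_similarity(uploads))  # expected grouping: [[0, 1, 2], [3]]
```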

As part of the work, Facebook said it has collected and catalogued 100 different deepfake models that are in existence.

Apple CEO Tim Cook rips EU’s proposed Digital Markets Act

Tim Cook, chief executive officer of Apple, speaks at the 2019 Dreamforce conference in San Francisco on November 19, 2019.

David Paul Morris | Bloomberg | Getty Images

Apple CEO Tim Cook said that he believes a proposed European law known as the DMA would “not be in the best interest of users,” signaling the iPhone maker’s opposition to European legislation that would force it to allow users to install software outside of Apple’s App Store.

“I look at the tech regulation that’s being discussed, I think there are good parts of it. And I think there are parts of it that are not in the best interests of the user,” Cook said on Wednesday via videoconference at the Viva Tech conference in France.

The European Union proposed two laws regulating big tech companies, the Digital Services Act and the Digital Markets Act, in December 2020. The DSA focuses on the online ad industry, while the DMA focuses on companies with large numbers of customers — like Apple, Google and Amazon — and sets rules requiring them to open up their platforms to competitors.

One of Cook’s issues with the law is that it would force Apple to permit sideloading of apps on the iPhone, which means manually installing software from the internet or a file instead of through an app store. Currently, Apple’s App Store is the only way to install apps on an iPhone, which has made it the focus of lawsuits and regulatory scrutiny around the world. Apple has claimed that its control over the App Store ensures high-quality apps and helps prevent malware.

Cook noted that the iPhone’s market share in France is only 23% and said that permitting sideloading on iPhones would damage both the privacy and security of users, citing increased malware on Android phones versus iPhones. Google’s Android allows sideloading.

“If you take an example of where I don’t think it’s in the best interest, that the current DMA language that is being discussed, would force sideloading on the iPhone,” Cook said. “And so this would be an alternate way of getting apps onto the iPhone, as we look at that, that would destroy the security of the iPhone.”

Cook said that Apple would participate in the debate over the proposed regulation, and said he thought some parts of the DSA are “right on,” citing provisions that would regulate platforms hosting disinformation on issues like vaccine hesitancy.
