This Deep Learning Technology Is Redefining Ad Integrity

By Russ Banham

Perspectives

In the pre-social media era, marketers had a good idea of how many times consumers saw their corporate logo and other brand images. Then, around 2010, as social media took off, millions of people could see a company’s logo without the organization ever knowing. That was both good and bad news.

While exposing people to a company’s logo could, in theory, make them more familiar with the brand, products, and services, there was also the possibility the image could be put into a negative context and go viral. Until recently, marketers had no way to monitor the use of their advertising images by third parties.

Today, deep learning makes this possible. Computer vision, an application of deep learning technology, gives marketers insight into how many people have viewed a logo or other brand images, and into the context in which those images appear.

“The use of computer vision for marketing purposes is becoming an increasingly common application of deep learning technology,” said Clement Chung, director of machine learning at Wave Financial, a provider of integrated software and tools for small businesses. “Companies can see just how they are being represented in online images, both positively and negatively.”

Ad Impact, Across Mediums

Deep learning is a subset of machine learning, itself a subset of artificial intelligence, in which computers learn by example rather than by explicit rules. In a self-driving car, for instance, the system is trained on many labeled images of red lights; once trained, it can recognize and stop at red lights it has never seen before.

Computer vision technology, then, uses object recognition software to tabulate how many times an ad or logo has been viewed in social or traditional media, and the context in which the image appeared.

“The technique involves two parts—the development of an algorithm to train the computer to find the image, and then the use of object recognition to determine the context of the image,” said Chung.
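As a rough illustration of the two parts Chung describes, the sketch below pairs a stand-in "detector" with a crude keyword-based context classifier, then tallies how often a logo appears and in what light. It is a hypothetical toy, not GumGum's or Wave's actual system: a production detector would be a trained deep object-recognition model operating on pixels, and all names here (`detect_logo`, `tally`, the word lists) are invented for the example.

```python
# Toy sketch of the two-part pipeline: (1) find the brand image,
# (2) classify the context it appears in. Hypothetical and simplified --
# a real system would run a deep object-detection model on the image itself;
# here the "detector" just checks pre-extracted image tags.

POSITIVE = {"love", "great", "fun"}   # illustrative word lists, not a real lexicon
NEGATIVE = {"sucks", "awful", "fail"}

def detect_logo(post, logo_tag):
    """Part 1: stand-in for an object-recognition model that spots the logo."""
    return logo_tag in post["image_tags"]

def classify_context(post):
    """Part 2: crude context scoring from the text accompanying the image."""
    words = set(post["caption"].lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def tally(posts, logo_tag):
    """Count impressions of the logo and bucket them by context."""
    counts = {"positive": 0, "neutral": 0, "negative": 0, "total": 0}
    for post in posts:
        if detect_logo(post, logo_tag):
            counts["total"] += 1
            counts[classify_context(post)] += 1
    return counts
```

Run over a feed of posts, `tally` yields the kind of breakdown the article describes: total impressions of the logo, split into positive, neutral, and negative contexts.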

This has two benefits for marketers. First, it helps them identify how many times an ad was viewed outside its original distribution campaign. Consider a beer ad at Dodger Stadium: computer vision technology could calculate how many times the billboard at the game was seen on both social media and traditional media. If the game is televised, local and even national news stations may also carry images of the event in which the billboard ad is visible.

Detecting viewership can help marketing teams decide where to put ad dollars. “What if the ratings on the televised event have dropped significantly, meaning fewer people are seeing it at home?” said Brian Kim, senior vice president of product at GumGum, a leading computer vision company. “This might convince the marketer to put its spend elsewhere.”

Using computer vision, companies can calculate if more people saw the image on social media or other media than the TV ratings indicate. “That’s a far better determinant of the advertisement’s value,” Kim noted.

Chung agreed, stating that a marketer may also discover things like a billboard placed behind the catcher was seen by more people than one situated in left field. “The goal in all cases is to get a bigger bang for your advertising dollar,” he said.

An Opportunity Algorithm

The technology also identifies missed opportunities and flags the need for damage control. If the context is positive, the company can amplify the advertised image and push it toward going viral.

However, there is also the risk that an image of the brand or its logo could become a “meme of the worst kind, used for satirical purposes,” said Chung. “In such cases, the product’s brand value can quickly erode, especially if the marketer is unaware of the negative associations and is too late to do anything about it.”

Computer vision offers a way to be notified in real time about insulting brand imagery. “An algorithm can be created to spot the use of certain offensive words that accompany the image on social media,” Chung pointed out. One obvious example, he noted: “This product sucks!”
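A minimal sketch of the kind of alert Chung describes might look like the following, assuming posts arrive as records with a caption and pre-extracted image tags. The watchlist and function names are invented for illustration; a real system would use object recognition on the image itself and a far richer language model than a word list.

```python
# Hypothetical real-time flagger: alert on posts that pair the brand's
# image with offensive language, per Chung's "This product sucks!" example.
import re

OFFENSIVE_WORDS = {"sucks", "awful", "garbage"}  # illustrative watchlist

def flag_offensive_posts(posts, logo_tag):
    """Return captions that mention offensive words alongside the brand image."""
    alerts = []
    for post in posts:
        if logo_tag not in post["image_tags"]:
            continue  # no brand image detected in this post
        # Normalize the caption to lowercase word tokens before matching
        words = set(re.findall(r"[a-z']+", post["caption"].lower()))
        if words & OFFENSIVE_WORDS:
            alerts.append(post["caption"])
    return alerts
```

Feeding a stream of posts through `flag_offensive_posts` surfaces only those that combine the logo with watchlisted language, giving brand managers the real-time notification the article describes.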

With the emerging technology, marketers have the ability to counter the offense. “There have been memes created by overworked millennials and teenagers fed up with too much homework where they’ve snapped a picture of themselves ‘drinking’ a household cleaning product—the ‘my fake suicide’ kind of thing,” said Kim. “With computer vision tools, brand managers have instant access to what is now a negative trend to quickly adjust the conversation in a more positive direction.”

Of course, computer vision can also help spot and seize marketing opportunities. For instance, GumGum has created a way for marketers to run an advertisement at opportune—often fleeting—moments in social media.

“We’ve developed a contextual relevance algorithm using object recognition software that can pinpoint, for example, when happy images involving humans and cats appear on social media,” said Kim. “Say this image appeared on CNN. We now have the opportunity to stick a banner ad for a national pet store chain into the image in real time. We would receive income from the pet store chain to display the advertisement and arrange for CNN to be paid a portion of the earnings.”

The algorithm can also be trained to find contexts in which a brand could appear in certain images but doesn’t. With the beer company example, this might mean inserting the logo into images of people photographed at parties, restaurants, or taverns where the marketer knows the beer is served.

“Computer vision can find images that fit the marketer’s desired demographic, and if the brand is not evident in these images, the information is nonetheless insightful for marketing purposes,” Chung said. “The company now has better intelligence on where to put its marketing spend.”

Russ Banham is a Pulitzer-nominated journalist and author who writes frequently about marketing technologies.
