Cannes Lions

Google Cloud x New York Times - Picture what the cloud can do

INSTRUMENT, Portland / GOOGLE / 2019

Awards:

2 Shortlisted Cannes Lions

-Presentation Image

-Case Film

Overview

Background

Businesses everywhere are sitting on enormous amounts of data. But there is a profound gap between knowing something exists and doing something with it. We wanted to demonstrate that, used properly, their data could alter the trajectory of their organizations in previously unimagined ways.

C-level executives and IT decision makers have many options when it comes to cloud technology, so we wanted to distinguish Google Cloud as the partner that enables enterprise clients to create and innovate, and to show how the technology translates into day-to-day operations and unlocks commercial opportunities.

The brief was to create a unique, engaging environment that compels our audience to imagine the possibilities of bringing the cloud into their own businesses by:

-Showcasing Google Cloud via a product demonstration

-Resonating with our core audience as humans, not just business people

-Educating businesses on what’s possible when they activate their inactive data with Google Cloud

Idea

For the past 100+ years, The New York Times has archived its historical collection of millions of photos and news clippings, referred to as “the morgue.” Partnering with Google Cloud, The New York Times has begun digitizing these artifacts with a custom tool that lets journalists easily explore these photos and glean new insights.

With the aim of inspiring prospective Google Cloud customers, we were tasked with demystifying the cloud by crafting a tangible, interactive use case around the New York Times partnership. To achieve this we created an immersive online experience using these artifacts to showcase how Google Cloud technology could enhance the New York Times’ storytelling abilities.

Supplemented with a web-based AR component, the experience puts the user in the shoes of a journalist—allowing them to select a photo from the archive, watch it being analyzed, and visually dive into stories extracted from data found by Google Cloud.

Strategy

From The Times’ historic photo archive of eight million images, we curated a set of evocative photos, digitized each, and then had them analyzed by Google Cloud technologies. Not only was the cloud able to pull detailed historical information from the front of the images (landmarks, significant objects, and locations), but it also pulled in-depth data from the back (photographer, date, and event). This data provided the backbone for our experience.
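The front-and-back extraction described above can be sketched as follows. This is a hypothetical illustration, not the campaign's actual pipeline: the annotation dicts mimic the general shape of Cloud Vision responses (landmark and label detection on the front, OCR text from the back), and all names, scores, and metadata values are invented for the example.

```python
# Hypothetical sketch: turning Vision-style annotations for one archive photo
# into structured "story data points". Field names and values are illustrative.

def extract_story_points(front_annotations, back_text):
    """Combine front-of-photo detections with OCR'd back-of-photo notes."""
    points = []
    # Landmarks and labels come from analyzing the front of the photograph.
    for landmark in front_annotations.get("landmarks", []):
        points.append(("landmark", landmark["description"]))
    for label in front_annotations.get("labels", []):
        if label["score"] >= 0.80:  # keep only confident detections
            points.append(("label", label["description"]))
    # The back of each print carries handwritten metadata (photographer,
    # date, event), recovered here from OCR text, one "Field: value" per line.
    for line in back_text.splitlines():
        if ":" in line:
            field, value = line.split(":", 1)
            points.append((field.strip().lower(), value.strip()))
    return points

front = {
    "landmarks": [{"description": "Pennsylvania Station"}],
    "labels": [{"description": "crowd", "score": 0.93},
               {"description": "umbrella", "score": 0.41}],
}
back = "Photographer: J. Smith\nDate: 1962-07-14\nEvent: Commuter rush"

for kind, value in extract_story_points(front, back):
    print(f"{kind}: {value}")
```

Structured tuples like these are the kind of "backbone" data a storytelling layer can then sort, thread, and present.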

After culling the data from Google Cloud, we began to interpret the information to find the stories buried within. These stories provided the basis for our digital experience and print campaign — strategically placed ads in The Times that showed the many hidden storylines. Via the website and an Augmented Reality-enabled mobile experience, users were able to explore each photo much like an archivist or journalist would. In a direct and evocative way, we showed C-suite and IT decision makers how Google Cloud could take one of the most significant archives in the U.S. and turn it into a wealth of new, useful information.

Execution

Over approximately eight weeks, we designed and developed a web-based experience that took real photos from The New York Times’ photo archive and combined them with data derived from Google Cloud technologies to tell new stories “hidden” in the photographs. To pull people into the experience, we created a web-based AR application that could identify photos in special ads printed in the Times and on billboards and posters around New York, overlaying additional information and unlocking the related stories.

Each story was squarely rooted in data found via the Cloud Vision API and other public APIs and data sets. Emulating what a journalist could uncover in these photos, we used all of the available data points to turn the photos into in-depth storytelling experiences.

Within each photo, users are prompted to choose between multiple story threads. They are then immersed in the world of that photo through subtle motion and a custom soundscape. On scroll, each of the story touchpoints—in the form of archival photos, videos, audio clips, or document clippings—settles into view, bringing the data found in the original photo to life in unexpected ways. Along the way, a thin vertical line in the center of the page acts as a narrative and visual throughline, connecting each touchpoint in the story.

The AR component provides another way into the experience by allowing people to point their phones at the ads appearing in the newspaper, on billboards, or in NYC subways. The web app analyzes the image on the user’s screen in real time and launches the appropriate story based on the detected image.
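The matching step behind an image-triggered launch like this can be sketched with a perceptual "difference hash": each known ad image and each camera frame is reduced to a tiny bit string, and the story whose ad hash is closest wins. This is an assumed, simplified technique for illustration, not the campaign's actual recognition code; the story IDs and pixel grids below are invented, and a real build would first downscale camera frames to a small grayscale grid.

```python
# Hypothetical sketch of image matching for an AR trigger. Images here are
# plain grayscale grids (lists of pixel rows), already downscaled.

def dhash(pixels):
    """Difference hash: 1 bit per horizontal neighbor comparison."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def match_story(frame, ads):
    """Return the story ID of the ad whose hash best matches the frame."""
    frame_hash = dhash(frame)
    return min(ads, key=lambda story_id: hamming(frame_hash, dhash(ads[story_id])))

# Invented reference ads (tiny 2x3 grayscale grids) keyed by story ID.
ads = {
    "penn-station": [[10, 200, 30], [40, 50, 220]],
    "coney-island": [[250, 20, 240], [5, 180, 15]],
}
# A slightly noisy "camera frame" of the penn-station ad still matches it,
# because the brightness gradients (and hence the hash bits) are unchanged.
frame = [[12, 198, 33], [38, 55, 215]]
print(match_story(frame, ads))  # → penn-station
```

Hashing gradients rather than raw pixels is what makes the match robust to the lighting and exposure shifts of a handheld phone camera.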

Outcome

We created a data-driven campaign that worked as advertising, but also as an educational, engaging digital environment. The campaign launched with high-impact print takeovers in The New York Times and immersive transit takeovers all over NYC and Washington, D.C. Visitors were invited to step into the shoes of an archivist or journalist through the Augmented Reality-enabled mobile experience — by simply holding up their phones to the image, they could scan the ad and explore the hidden stories created from the data unearthed by Google Cloud.

In three months, we saw a shift in brand perception. The campaign helped drive a 4% increase in unaided awareness and a 6% lift in the perception of industry leadership in Artificial Intelligence. From the beginning, sales were not considered a key performance indicator for the campaign. Nevertheless, Google Cloud saw a 1.34x return on investment, meaning the sales generated paid for the campaign and turned a profit over the life of the experience.
