Not Another Augmented Reality How-To

March 21, 2018 Caleb Garling

 

We’ve all read stories ad nauseam about augmented reality’s toolkits. The reality is that we’re still learning how to optimize this nascent technology. So rather than bore you with another overview of the different facets, I thought it better to weigh the considerations of applying AR to a fictional application:

You too can livestream from Machu Picchu with a terrifying 3-D llama.

Graphic by Built to Adapt,  Photo courtesy of Morgan Holzer

Imagine you run a popular travel app called Voyager and your competitor just landed a big write-up in the Journal. The market is getting hotter. Travelers use Voyager to book flights, find hotels, rent cars, and make reservations at fine dining establishments around the world, but you realize it needs that next leg up: a move that boosts not only user engagement, but also engagement with partnering airlines, hotels, rental agencies, and restaurants, and in that sense, revenue.

After numerous chats with partners, users, and analysts, your product and marketing teams agree Voyager needs to get in the augmented reality game. You love this idea. Augmented reality (AR) is one of those technologies that’s been in the wings for a while and now with cameras and processors reaching futuristic heights, it’s here.

You’ve been tracking the heavyweights behind the trend. Facebook and Snap have embraced AR with reams of graphics for posts and messages. Apple released ARKit, a developer toolset for augmented reality apps, in June 2017; the iPhone X adds face tracking, which allows the phone or tablet to understand what the user is doing so the projected environment can respond appropriately. Google released ARCore a couple of months later; its AR developer kit uses the phone to map the surrounding environment and understand lighting so virtual objects appear “normal.”

You could say there are five considerations in building AR apps: viewport, reality understanding, object placement, rendering, and interactions.

Each of these broad areas has its own considerations to weigh for your Voyager app. We’ll go through them below. We welcome your comments and additional considerations.

Through The User’s Eyes

If there is one consistent habit of travelers who’d use Voyager, it’s documentation, in all its forms. That’s strategic, because one of the hardest aspects of app development is asking a user to do something new. People also want interesting documentation: a way to make their mark and share memories that stands out. We all know that shot of someone smiling above Machu Picchu.

You first decide exactly how users will consume the AR assets: via live streams that go out to their friends in the Voyager network. That accounts for the viewport (the phone screen) and for part of understanding the reality around them (more on that in a second). The AR assets of course need to be relevant: you need to know exactly where on the planet the user is broadcasting, so Voyager pulls their geo-coordinates. You’ve accounted for camera and screen positioning, which is fundamental.

Now, what to overlay. Your testing shows that any overlays on a user’s livestream must be far more strategic than squishy hearts and smiley faces. You develop a hierarchy of three types of AR assets: text, interactive images, and background changes. Each needs to reference local traditions, cuisine, and history.

Your team starts testing. They are selective, revealing the AR capabilities only to users at geo-locations with the most photo shares, and only in locations with simple local references. Sunset Boulevard in Los Angeles would have too many potential types of assets: movie stars, skaters, rock stars, surfers, etc. But Loch Ness, Scotland would be narrower. Now you can test your classes of assets. Users have the option to livestream with 1) text scrolling through the picture on the history of Nessie’s legend, 2) Nessie herself cruising stealthily in the background, or 3) a time slider that shows the different castles and towns that have, ahem, nestled along Loch Ness’s shores over the centuries.

You get your results and your team has decisions to make. The text is simple, pulls from Wikipedia, and therefore costs nothing to maintain, but doesn’t get much interaction from users (a touch boring). The graphics see good uptake, but maintaining their libraries would cost a great deal. The time sliders would require the most work, considering the research, but see the highest uptake. For now, you prioritize a mix of graphics and time sliders, with text as more of an afterthought in the menu, and start deploying, testing, and iterating over additional locations.

Underpinning this testing is the question of what business information you can extract. How can your data team learn from the asset usage patterns? That’s the question you keep asking. At first, you’re going to make money by revealing offers from your travel partners. If a user is livestreaming at the Statue of Liberty, you have them engaged with the app (something people rarely do with travel apps after they’ve landed at their destination) and can offer discounted tickets to a show or restaurant later.

The team is conscious of seeming invasive, but you could even customize the offer based on the graphics someone chooses. Maybe you only show a restaurant offer if they chose a New York Pizza Slice Monster to attack the statue like Godzilla: that kind of creative, but revenue-oriented, iteration. Maybe the more people share graphics using Voyager, the more they earn points toward other offers or fares. In the background, your data science team keeps optimizing the models and incentives. Partner travel vendors could then sponsor graphics that slide higher up the user’s options. Now you’re creating brand touchpoints as you iterate on the best AR assets by user, location, and partner.
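To make the sponsored-ranking idea concrete, here’s a minimal sketch of how a menu of AR assets might be ordered by engagement with a flat boost for sponsored graphics. This is a toy model, not Voyager’s actual logic; the field names, scores, and `sponsor_boost` value are all hypothetical:

```python
def rank_assets(assets, sponsor_boost=0.2):
    """Order AR assets for the user's menu: highest engagement first,
    with a hypothetical flat score boost for partner-sponsored graphics."""
    def score(asset):
        bonus = sponsor_boost if asset.get("sponsored") else 0.0
        return asset["engagement"] + bonus
    return sorted(assets, key=score, reverse=True)

menu = rank_assets([
    {"name": "nessie", "engagement": 0.50},
    {"name": "time_slider", "engagement": 0.60},
    {"name": "text_scroll", "engagement": 0.45, "sponsored": True},
])
# The sponsored text scroll (0.45 + 0.2) now outranks the time slider (0.60).
```

In practice the score would come from the data science team’s models rather than a fixed boost, but the shape of the problem (rank assets per user, per location, per partner) is the same.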

Of course, none of the above matters if the AR assets don’t do one basic thing: look good. Humans have very sharp vision. Understanding the reality streaming live around the user is critical. When an object falls into the uncanny valley, or the lousy-design valley, users flee. Nessie needs to glide past in the water, not on the beach. Or consider Nessie’s shadow: the app needs to detect the direction of the sun and line them up. Our brains are so attuned to light and dark that anything clunky grates. Think about scale. How big is Nessie? How will you fit her height to the people in the photo? Or Nessie’s reflection. Just a hint of how your assets interact with the space around them, a little glint in the water, can bring the whole piece to life.
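The shadow and scale problems above are, at bottom, small geometry calculations. In an ARKit app you’d do this in Swift (ARKit supplies a per-frame light estimate), but the underlying arithmetic can be sketched language-agnostically. Both function names and conventions here are mine, for illustration:

```python
import math

def scale_for_target_height(model_height_m, target_height_m):
    """Uniform scale factor that fits a model (say, Nessie) to a
    desired real-world height, e.g. relative to people in the shot."""
    return target_height_m / model_height_m

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing from the scene toward the sun, given a
    compass azimuth and an elevation above the horizon.
    Convention (one common right-handed choice): x = east, y = up, z = north."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),
            math.sin(el),
            math.cos(el) * math.cos(az))
```

A renderer would point its directional light opposite `sun_direction(...)` so the virtual shadow falls the same way as real ones, and apply `scale_for_target_height(...)` before placing the model.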

But you don’t want to agree on a good idea in a conference room only to waste time building features no one wants. Tools like InVision, Framer, and Origami can be used for prototyping, testing, and experimentation. Apple’s Xcode gives you the most native experience and lets you place media (in .dae format) into the augmented space easily. Xcode allows deployment on an iPhone and prototyping directly with ARKit. This is the functionality many developers need, without all the complexity of building something from scratch.

But Voyager still has future considerations. The tech isn’t quite there yet. There’s no UI testing framework for ARKit/SceneKit, and you’re still looking for a good test of the geometric transformations that determine an object’s position, rotation, and scale. One of the cruxes of AR is making certain object placement is near perfect. Manual testing is necessary to ensure you’re not shipping something that renders broken. And as with any development process, especially in AR, teams need to monitor what’s working and eliminate the waste. Users move through so many apps in their day; taking time in AR can be a big ask. Everything you build needs to keep them engaged and help you grow.
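Even without an ARKit UI testing framework, the transformation check itself is testable: compare the placed object’s position, rotation, and scale against an expected transform within a tolerance, since floating-point math makes exact equality useless. Here’s a minimal language-agnostic sketch of that assertion (in a Swift project you’d express the same idea with XCTest’s accuracy-based assertions):

```python
def transforms_close(a, b, tol=1e-4):
    """Compare two placement transforms, each given as a
    (position, rotation_quaternion, scale) tuple of component tuples,
    component-wise within a tolerance."""
    return all(
        abs(x - y) <= tol
        for part_a, part_b in zip(a, b)
        for x, y in zip(part_a, part_b)
    )

expected = ((0.0, 0.0, -0.5), (0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0))
actual   = ((0.0, 0.0, -0.50004), (0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0))
# Within tolerance, so this placement would pass the check.
```

The tolerance is the judgment call: tight enough to catch a broken render, loose enough to ignore floating-point noise.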

Author’s note:

We picked a hypothetical travel app because it’s the simplest way to talk about basic building blocks. We obviously can’t capture every consideration in AR. A lot of AR is gaming, of course. According to TechCrunch, games account for over half of the three-million-plus ARKit-powered apps and 62 percent of their App Store revenue. But games are so variable (strategy, controls, environment) that we kept it simple. We also kept it confined to the phone because building for HoloLens (Microsoft’s mixed reality headset) is a related but new ballgame. We also want to see more AR applications in business and operational apps. It’s such powerful technology that developers should be exploring how they can bolster their own products with it. Get it right and you’ve reinvented your market’s playing field.

Caleb Garling is a freelance writer and contributor to Built to Adapt who has been on staff at Wired, The San Francisco Chronicle, and numerous other outlets.

Change is the only constant, so individuals, institutions, and businesses must be Built to Adapt. At Pivotal, we believe change should be expected, embraced, and incorporated continuously through development and innovation, because good software is never finished.


Not Another Augmented Reality How-To was originally published in Built to Adapt on Medium, where people are continuing the conversation by highlighting and responding to this story.

 
