Learning From the Past with Location Based AR

Imagine 6,000 acres of meadowland crisscrossed here and there with stout wooden fences and narrow roads. Apart from some monuments and statues, there is little to indicate the crucial role these softly rolling hills in central Pennsylvania played during the Civil War. Cemetery Hill, Little Round Top, Devil’s Den, Seminary Ridge…on these solemn landmarks the Confederate Army battled the Union Army at Gettysburg for three days, resulting in 50,000 casualties.

All of us at HitPoint Studios wanted to tell the story of what happened at this battle, generally considered the turning point in the Civil War. We had previous experience creating location-specific games (including a game for DreamWorks and its How to Train Your Dragon franchise) and felt the time was ripe to combine geolocation, AR, and gamification to create a rich interactive experience at the most-visited battlefield in America.

Then Niantic announced the Beyond Reality Developer Contest, and we pounced. It was the opportunity we had been looking for. Niantic’s goal was to encourage developers to use and give feedback on its platform while exploring new use cases and gameplay mechanics. As one of ten finalists, HitPoint’s goal was to create an experience at Gettysburg that would engage the almost 1,000,000 annual visitors in new ways. (You’ll have to read to the end to find out if we won!)

Turning those empty fields into compelling interactive moments while maintaining the site’s solemnity…it was beyond hard. We started by determining which Points of Interest (POIs), like monuments, to include on our virtual map. We used geolocation to track player positions in real time. As a player walked toward one of the POIs and selected it on the map, they were able to play mini-games and receive badges and points. Players could play solo or collaborate with others on a team to earn a high score on the leaderboard.
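
Under the hood, the proximity trigger is simple geolocation math. Here is a minimal sketch of the idea (the function names, the 30-meter radius, and the coordinates are illustrative, not our actual code or data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(player, pois, radius_m=30):
    """Return the POIs within radius_m meters of the player's (lat, lon)."""
    return [name for name, (lat, lon) in pois.items()
            if haversine_m(player[0], player[1], lat, lon) <= radius_m]

pois = {"Devil's Den": (39.7920, -77.2424),
        "Little Round Top": (39.7912, -77.2366)}
print(nearby_pois((39.7919, -77.2423), pois))  # only Devil's Den is in range
```

The game loop polls the player’s GPS fix every few seconds and unlocks a POI’s mini-games once the player enters its radius.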

Visiting some of the monuments prompted a “citizen archeology” game, in which the player used a fun detector tool to locate pieces of an authentic artifact, e.g., a Winchester rifle. Beeping sound effects and an on-screen meter helped guide the player to the buried item. Once the dig site was located, the player could use a virtual pickaxe and brush to dig up, clean, and assemble the artifact.

But a true understanding of Gettysburg can’t be achieved simply by providing information and facts. We thought long and hard about how to evoke the emotion and pathos of the battle. Our solution was to incorporate a soldier on the virtual map who marched along a set route at certain times. If the player moved close enough to him as he passed, he gave the player a letter from an actual combatant. The letter unfolded and was read aloud, with 3D text appearing in AR on the horizon. “A fierce battle was fought here today…”

Once the letter was read, the soldier gave the player a virtual rose. The rose could be placed anywhere as a tribute; its GPS location was saved permanently. Players were able to see their own glowing rose as well as anyone else’s rose at any time.

Evaluating and testing an app when 3000 miles away from the location it was designed for was challenging. We “spoofed” monuments wherever we could around our office (sidewalks, hallways, etc.), but that obviously wasn’t the same as actually being there. Fingers crossed, we visited the battlefield with the indefatigable Garry Adelman, Director of History and Education at the American Battlefield Trust, and had a blast trying it out with some enthusiastic visitors.

So how did HitPoint do in the Niantic Beyond Reality Developer Contest? We learned so much about players’ expectations, what they found fun (and not fun), the critical importance of UI in augmented reality experiences, what’s good (and what’s still baking in the oven) with the Niantic platform, and much more. But sadly, we didn’t win the contest. Not discouraged in the least, we’re now busily planning which historical and cultural locations to focus on next. Stay tuned!

Great Project-Great Partners

We are thrilled and humbled that the National Institutes of Health is supporting us with our second Small Business Innovation Research grant. It really is a dream opportunity and team. The corporate leadership at Explore is smart, focused, and energetic. Academics from Purdue University are providing the theoretical framework and research results. And the project targets HitPoint’s sweet spot – challenging, scalable technology. We will be creating a multiplayer, collaborative AR platform with an open-ended style of learning, for teaching science concepts. I can’t wait to get started!

A quick video: https://www.youtube.com/watch?v=gHsGher7HGU&feature=youtu.be

More details here: https://www.eurekalert.org/pub_releases/2019-09/pu-na092419.php

How to Fix the Smartphone

Jonah Lehrer, my son and a wonderful science writer, recently posted this blog. He gave me permission to copy it here. Between all of us, surely there is someone who can turn this eye-popping research into an actual product!

The astragalus is the heel bone of a running animal. It’s an elegant part of the skeleton, so curved it looks carved, with four distinct sides. It fits in the palm of your hand.

The astragalus is also one of the most common archaeological artifacts, found in ancient dig sites all over the world. The bones have been uncovered in Greek temples and Mongolian villages, Egyptian tombs and Native American cave dwellings. In Bruegel’s masterpiece “Children’s Games,” two women toss astragali in the corner of the painting. They look like they’re having fun.

The women tossing astragali in Bruegel’s “Children’s Games”

Why are these small animal bones such a universal relic? The answer returns us to the peculiar shape of the astragalus. Because it has four sides, the bone can be used like dice: when thrown on a flat surface, it turns into a primitive randomizer, injecting a dose of uncertainty into the game. As the science historian Ian Hacking writes, these dice made of skeletons are so ubiquitous that “it is hard to find a place where people use no randomizers.” 

Of course, we don’t throw bones anymore. Now we have more advanced sources of randomness. Just look at slot machines, those money-sucking devices that enchant people with their unpredictable rewards. Although we know the games are stacked against us, we can’t resist the allure of their intermittent reinforcements. 

Or consider the smartphone. If the reward of slots is the rare jackpot, the reward of these devices is the arrival of a notification. As noted in a new paper by Nicholas Fitz and colleagues in Computers in Human Behavior, “In less than a decade, receiving a notification has become one of the most commonly occurring human experiences. They arrive bearing new information from or about a person, place, or thing: a text from your mom, news about Donald Trump, or a calendar invite for a meeting.” The ancients tossed animal bones to experience the thrill of random rewards. All we have to do is glance at these gadgets in our pockets.

There’s nothing inherently wrong with notifications. Unfortunately, their intermittent delivery (and the way they are constantly evolving to become more salient and sticky) creates a digital system that sucks up our attention, which is why the typical American spends 3 to 5 hours a day staring into small shiny screens. The end result is a permanent state of distraction, a mental life defined by its addictive interruptions.

Is there a better way? This urgent question is the subject of that new paper by Fitz et al. The scientists explore the potential benefits of creating smartphone notifications that are batched and predictable, arriving at regular intervals throughout the day. If our current smartphone experience is like a pocket slot machine, every random beep another reward, these batched notifications try to remove the twitchy uncertainty. We know exactly when the rewards will arrive, which will hopefully make them far less exciting.

To test the effectiveness of this setup, Fitz et al. recruited 237 smartphone users in India. Each of the users was randomly assigned to one of four conditions: (1) notifications received as usual; (2) notifications batched every hour; (3) notifications batched three times a day; or (4) no notifications at all. The conditions were implemented using a custom-built Android app.
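
To make the batching manipulation concrete, here is a toy sketch of how a three-batches-a-day condition might work (the class, the schedule, and the method names are my own invention, not the app from the paper):

```python
from datetime import datetime, time

# Assumed delivery schedule for the three-batches-a-day condition.
BATCH_TIMES = [time(9, 0), time(13, 0), time(17, 0)]

class BatchingInbox:
    """Queue notifications silently; release them only at batch times."""

    def __init__(self, batch_times=BATCH_TIMES):
        self.batch_times = batch_times
        self.pending = []

    def receive(self, notification):
        self.pending.append(notification)  # no beep, no banner

    def deliver(self, now: datetime):
        """Return the whole pending batch at a scheduled minute, else nothing."""
        due = any(now.hour == t.hour and now.minute == t.minute
                  for t in self.batch_times)
        if not due:
            return []
        batch, self.pending = self.pending, []
        return batch

inbox = BatchingInbox()
inbox.receive("text from your mom")
inbox.receive("calendar invite for a meeting")
print(inbox.deliver(datetime(2019, 8, 2, 10, 30)))  # off schedule: []
print(inbox.deliver(datetime(2019, 8, 2, 13, 0)))   # batch time: both at once
```

The point of the design is visible in the last two lines: nothing leaks out between batch times, so the reward schedule becomes perfectly predictable.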


Which setup worked best? It wasn’t close—batching notifications into three predictable intervals led to improvements across a wide range of psychological outcomes. (Hourly batching was less effective, though it did lead people to feel less interrupted by their phones.) According to the data, those who got three batches reported less inattention, more productivity, fewer negative feelings, reduced stress and increased control over their phone. They also unlocked their phones about 40 percent less often.

Interestingly, silencing all notifications tended to backfire, boosting anxiety without any parallel benefits in focus. (People were still distracted, just by their FOMO, not their gadgets.)

This research comes with enormous practical implications. In a little over a decade, the smartphone has transformed the nature of human attention, consuming gobs of our mental bandwidth. It’s a consumption we often underestimate. According to Fitz et al., most people think they get about thirty notifications per day. The reality is far worse, with the typical subject receiving more than sixty beeps, pings and buzzes. But if you ask them how many notifications are ideal, they give an answer closer to fifteen. In other words, we desire technology with limits, a smartphone that shields us from its own appeal.

And this brings us back to the power of intermittent reinforcement. Randomness has always been entertaining. The difference now is that we’ve engineered a technology that’s simply too irresistible—software evolves far faster than our hardware—which is why we end up spending more time staring at our phones than we do parenting, exercising or eating combined.

But it doesn’t have to be this way. One day, a gadget maker will give people what they really want: a machine that doesn’t hijack the brain. Based on this paper, a core element of this future gadget will be a default notification system that delivers its interruptions in predictable batches. That text can wait; so can the update from the Times and Twitter; we don’t need to know who liked our Instagram in real time.  

Sometimes, less is so much more.

Fitz, N., Kushlev, K., Jagannathan, R., Lewis, T., Paliwal, D., & Ariely, D. (2019). Batching smartphone notifications can improve well-being. Computers in Human Behavior.

Surprise and Delight, Rinse and Repeat

Can you believe this video has garnered almost 1 BILLION views?

Ryan ToysReview is a phenomenon. Since 2014, “unboxing” channels on YouTube have changed the face of children’s television and altered the way kids discover new toys. Megastar Ryan, now 8 years old, generated $22 million last year from his unboxing videos and has created a new toy and apparel line for Walmart that includes blind-bag collectibles. Nickelodeon even announced a new TV show, called Toy Toy Toy – The Unboxing Show.

The viewer doesn’t know what will be revealed in an unboxing video, but they anticipate that it will be something they like. It is that anticipation, combined with mystery, that makes these videos so compelling. Humans love the unknown. The neurotransmitter dopamine is associated with pleasure of all sorts – sex, drugs, and video games – and delight spikes with uncertainty. Turns out we get bored quickly; the first taste of cake is vastly better than the third. Habituation to stimuli is a key attribute of our nervous system. In contrast, if you see something surprising or unknown, your attention becomes immediately engaged. (This effect is obvious even in infants, who pay more attention to an “optimally scrambled” face than a normal one.) Neuroscientists call this a “prediction error.” It’s the delight we never saw coming. It doesn’t matter if it’s a loot box in a video game or one of Ryan’s unboxing videos. These mysteries excite our dopaminergic system, which is why we like them.

Toy creators paid attention to the burgeoning popularity of unboxing videos and two years after Ryan first burst onto the scene, MGA introduced the L.O.L. line of toys. The company wanted to cash in on the unboxing and collectibles trends, and so it came up with more than 250 dolls whose identities are hidden until unwrapped.

L.O.L. collectible dolls

How does it work? https://www.youtube.com/watch?v=wwJbQx0a8GM  The L.O.L. Surprise! Pearl Surprise toy boasts “3 layers of surprises.” First you open the six plastic pearls, each containing clothing for the L.O.L. collectible dolls. Next, you unwrap a giant seashell covered with “sand,” then place it in water. It fizzes for a few minutes until all of the “sand” falls off. Amidst some lovely pastel colors generated by the transformation, a plastic shell emerges. You open the shell, and there are three “surprise” bags inside containing, finally, the dolls and accessories. The transformations and surprises keep coming. Even the dolls’ hair and skin change color when dunked in water. (The downside is that all this packaging is filling up our landfills!)

Like other similarly successful toys, L.O.L. dolls combine three key elements: (1) Surprise within a known collectible universe. The child can anticipate the type of toy but not the specific one in the collection, thus holding out the promise of obtaining a “rare” specimen; (2) Multiple surprises. The toy is teased through layers of unwrapping, with delight and surprise along the way; (3) Toy transformations. The child is required to take an active role in the physical transformation of the toy, e.g., adding water to the seashell.

And just so you don’t think that these toys are only for girls, here’s another example. Grandson Izzy loves Dino eggs. He prefers to hack at the eggshell with a plastic pick to reveal the specific dinosaur inside, but you can also submerge it in water and watch the shell fall off in the fizzing water. Voila, a triceratops!

It’s genius, combining anticipation and mystery with actions that reveal further surprises. By emphasizing the process and making the child feel as if she is contributing to making the product, the child has an immediate sense of ownership. The child is the creator, not just a purchaser. It feeds their curiosity and that strange pleasure that comes from pursuing the unknown.

HitPoint’s Fairy House

How does all this relate to Augmented Reality, my primary interest these days? I believe that AR is the perfect medium for combining the power of mystery and subsequent revelation with a physical toy. At HitPoint Studios, we created a prototype of a toy fairy house and mailbox and then combined it with virtual fairies. The child receives virtual messages from the fairy queen, suggesting mobile AR games and activities to do in and around the fairy house. Over time, more and different fairies appear as the child carries out activities to make the fairies happy.  Likewise, the physical fairy house takes on new virtual features, like furniture, lights and sounds.

HitPoint’s Fairy House combines features of some of the bestselling toys described earlier. The mysteries of the fairy world are revealed gradually, through the child’s actions, thus causing virtual transformations (rather than physical) in the toy and surrounding environment. Surprise and delight are key elements to every interaction. Unfortunately, HitPoint hasn’t successfully sold a toy manufacturer on our vision…yet, but we have high hopes that once a company has success with this approach (LEGO?), more companies will follow.

Multiplayer AR – Good, Bad, Ugly

AR is an impressive technology, but until the launch of ARKit 2.0 last year, single-user experiences were the norm. Unfortunately, these early apps neither retained customers nor monetized. (We know this from first-hand experience!) Since then, multiplayer apps, where people can share a virtual experience in real time, have gained steam. Players can affect the same environment and see different things depending on where they are positioned. Just as in gaming generally, social playing is simply much more fun.

Recently HitPoint Studios partnered with Caesars Entertainment in its first foray into multiplayer AR gaming. We created a series of mini-games for a new bar experience at The LINQ Hotel in Las Vegas. Using the bar coasters as targets, five different AR flick-style sports games (baseball, football, basketball, hockey, and beer pong) can be played, with high scores posted on an online leaderboard.

One of the games, beer pong, is a multiplayer game. And here, let me go straight to the punch line. Both the design and engineering of multiplayer AR games are super challenging, but using target-based AR considerably reduces the risks.

Beer pong has a well-understood gameplay mechanic, so designing a two-player version around a bar coaster was not difficult. However, designing multiplayer AR that is NOT target-based can be hard because the players have complete control of their cameras and, depending on the gameplay style, may need to know what the other player is seeing and how it differs from their own perspective. Imagine a multiplayer AR combat game where your opponent is hidden on the roof, not visible from your orientation. How does a designer encourage player movement, perspective shifting, and mental rotation when necessary? The UX challenges can be monumental.

Regarding technical hurdles for multiplayer AR, relocalization is one of the toughest nuts to crack. Until world maps could be saved across multiple sessions and transferred between devices, users could not revisit a location and find that the app remembered it. It also meant that AR experiences were generally solo ones. Then Apple introduced relocalization in iOS 11.3, which lets users restore state after an interruption, relocalize a world map in a later session, or share it with another user or device. There are some highly complicated visual computing systems supporting this process, e.g., SLAM map coordinates, but relying on a common physical tracking marker image or QR code, as we did in the Caesars game, neatly solves the problem.

For the marker to work, both players point their phones at the coaster on the table in front of them; the app treats the marker as the origin of the coordinate system, making the real world and the virtual world consistent across both phones. This works quite well and removes some low-level problems of figuring out where the players are relative to each other. Instead, each device makes assumptions based on where it is relative to the coaster.
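
In coordinate terms, the trick is that game objects live in the coaster’s frame, and each device maps them into its own world frame using its own estimate of the coaster’s pose. A rough sketch of the math (the values and function names are illustrative, not our engine code):

```python
import math

def rot_y(deg):
    """3x3 rotation matrix about the vertical axis."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def to_device_frame(p_marker, marker_rot, marker_pos):
    """Map a point from coaster coordinates into one device's world frame."""
    rotated = [sum(marker_rot[i][j] * p_marker[j] for j in range(3))
               for i in range(3)]
    return tuple(r + t for r, t in zip(rotated, marker_pos))

# A virtual cup sits 0.1 m in front of the coaster, in coaster coordinates.
cup = (0.0, 0.0, 0.1)

# Two phones detect the same coaster from opposite sides of the table, each
# reporting its own pose estimate (rotation, position) for the marker.
cup_on_phone_a = to_device_frame(cup, rot_y(0), (0.0, -0.4, 1.0))
cup_on_phone_b = to_device_frame(cup, rot_y(180), (0.0, -0.4, 1.2))
```

Each phone ends up with the cup in its own coordinates, yet both render it at the same physical spot on the table, without either phone ever knowing where the other one is.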

Meanwhile, target-based multiplayer AR has an obvious disadvantage. While it works quite well in a specific location like a restaurant or office, only certain kinds of games or experiences lend themselves to playing around a coaster or beacon. It’s definitely a good short-term fix for multiplayer AR, and a great way for our company to “get its feet wet,” but obviously only the beginning.

What’s next for the team at HitPoint, now that the Caesars LINQ game has launched? Given our extensive experience with the now-defunct Tango phone, our engineers are particularly excited about Google’s Cloud Anchors tech, which uses recognized landscapes for relocalization (similar to the ADF scans we used with Tango for some cool AR retail demos). Can’t wait to see what the HitPoint team comes up with next!

A Year of “Firsts”

Happy Holidays 2018!

Last year around this time I announced that I was joining HitPoint Studios as President, after 20 years at Legacy Games. It was a momentous decision, made easier by the fact that Paul Hake, HitPoint’s CEO, and I had worked together on numerous projects previously.

So how’s it gone? 2018 was, simply put, a year of “firsts,” both for me and HitPoint Studios. Our goal was to challenge ourselves with new, interesting, and technically complex projects. (I think we may have overachieved!)

1 – We launched our first real-time multiplayer mobile game last week, for none other than Ellen DeGeneres Digital Ventures. Loosely based on her popular daily TV show, Game of Games, we are proud of the app’s gameplay and polish, and will continue to build on its success next year.

2 – We developed our first Remote Gaming Server (RGS) – a cross-platform, cloud-based data management platform for casino games, with extensive social features. It is now fully licensed and operational in Europe. We will be adding games to the RGS throughout 2019.

3 – We created our first multiplayer, location-based AR game for Caesars. PlayLinq is a collection of AR sports mini-games (baseball, football, basketball, hockey, beer pong) designed to be played on location at the LINQ Hotel in Las Vegas. (BTW – Multiplayer AR is NOT for the faint of heart!)

4 – And switching gears…we designed our first smart toys, starting with an adorable fairy house and virtual fairies, and continuing on with AR-controlled vehicles (super-secret project).

5 – We collaborated with Explore Interactive to develop our first educational AR game, designed to teach kids about electricity.  It includes a series of “challenges” to build specific types of circuits, along with an open-ended “create” mode.

And if that doesn’t sound like a sufficient challenge for our 25-person team in MA and CA, in 2018 we also (a) continued work on the Disney Magic Timer app, adding enhanced AR plus a robust Content Management System, (b) launched a beautiful multi-platform adventure game, Adera, (c) developed a second Facebook Instant Game and (d) ported Crayola Color Blaster from ARKit/ARCore to the Lenovo Mirage AR. 

We’ve learned a lot this year, some of which I tried to capture in blog posts here and when speaking at conferences. There have also been plenty of bumps and bruises, as well as some magical moments. Reminds me of playing a well-balanced game, when the next move seems to be within reach, but you have to stretch to get there.

We’ve been stretching a lot this year at HitPoint, and it feels good.

Wishing you peace, good health, prosperity, and your own satisfying challenges in 2019.

Augmented Reality and Schools – What’s Next?

What determines if an EdTech product is successful? The criteria are daunting. (1) It must teach something faster or better than is possible otherwise. (2) It must be relevant to the curriculum and teachers’ needs. (3) It must be affordable and accessible to all schools and students. (4) Ideally, it engages the student through experiential, discovery learning rather than rote memorization.

How hard is it to create this mythical piece of technology? Well nigh impossible, if the wreckage of past innovations is any indication. I’ve written previously that the most successful attempts have been those hardware and software products that could be used by teachers in a flexible fashion, as tools to teach a variety of different subjects. Computers, tablets, HyperCard, Microsoft or Google office software, programming languages, graphics apps, etc. all make the list of successful technology innovations in classrooms. Their precise level of success, of course, depends on how well they are implemented in the classroom.

Now Augmented and Virtual Reality are headed for the classroom. How are they faring? It is difficult to justify the use of VR in the classroom, given the cost of most hardware. And while Google Cardboard is affordable, it is limited to presenting pre-rendered visual experiences in 3D. Is it that much more effective than watching a YouTube video? I doubt it.

And what about Augmented Reality apps for the classroom? I’m admittedly more of a fan of AR than VR. In its best incarnation, AR incorporates the real environment as part of the experience rather than creating an alternative world as VR does. Some types of AR are possible with low-end mobile phones, so cost isn’t necessarily an issue. However, judged against the success criteria I listed initially, AR is sorely lacking. It is not clear that most curriculum-based AR experiences are more effective than other teaching methods. And while there are some open-ended tools for creating AR experiences (e.g., Blippar, Zappar), they are focused on the consumer market and not geared to school use. EON Reality and Jig Workshop both have authoring tools for teachers, but both are limited in the kinds of interactions possible with pre-packaged 3D objects.

Meanwhile, AR is being rapidly implemented in industry, with use cases that would be equally helpful in K-12 schools and colleges. Businesses are challenged with training millennials to take the place of retiring Baby Boomers, and they are finding that Augmented Reality is an ideal tool for any kind of procedural learning. Workers are taught how to fix or assemble complicated equipment using step-by-step instructions that appear directly on the machines. Similarly, workers are learning advanced manufacturing techniques, with operational guides turned into 3D displays. Both real-time collaboration and asynchronous communication are possible using AR, with a remote expert providing help and guidance by leaving messages or drawings in the relevant location for workers.

How would that type of functionality fit into the classroom? Well, clearly (and I’d like to be on the team that creates this), this mythical app would need to be designed specifically for schools and include all the administrative, networking, and privacy functionality required. It would greatly facilitate distance learning. A remote or in-classroom teacher could look at what a student is working on, e.g., 3D printing, robot assembly, chem lab, shop, frog dissection, circuit building, etc., and write or speak comments, displaying instructions directly on the objects in question, in real time or asynchronously. Alternatively, two or more students could work collaboratively, in the same location or remotely, using this AR technology.

Some innovative developer is going to take the collaborative AR functionality being used successfully in industry today and build a similar platform for high schools, technical schools, and colleges to engage the learner in new ways. I’d love to see a STEM-oriented tool for students to practice the engineering design process from problem to prototype while seamlessly collaborating and troubleshooting with teachers and peers using AR. It’s coming, and I, for one, can’t wait for the future to get here!

Augmented Reality in the Classroom – Lessons Learned

Many companies with glitzy AR apps have come and gone; many educational use cases have been tried and discarded. Is Augmented Reality just the latest in an endless stream of educational technologies that don’t add up to more learning?

Maybe. Beautifully rendered virtual images of the periodic table and human anatomy or spinning globes have failed to become standard features of the school curriculum. Volumes have been written about why, but it comes down to cost, convenience, and ROI. Educational AR apps tend to be difficult and costly to create and must compete with traditional “supplemental” school teaching aids. Students generally have to look through an expensive smartphone to see the virtual graphics and animations, making the interface awkward. While the initial experience engenders surprise and delight, there is little need, or desire, to repeat the experience. It’s also not clear how seeing a dinosaur “come to life” from the page of a textbook actually increases learning.

If you look at the history of educational technology, the technologies that ultimately earn a place in classrooms are used as tools first, rather than to directly deliver curriculum content. Consider Apple’s HyperCard, introduced in the late 1980s. Teachers loved it and similar products because they were easy to integrate into any subject matter. They turned kids into active creators rather than simply consumers of content. Plus, they exposed children to more “real world” uses of technology, better preparing them for future jobs. Tools like HyperCard did more than anything else to ensure a permanent place in classrooms for computers.

The first interactive product I ever produced was Children’s Writing and Publishing Center, published by The Learning Company. It allowed kids and teachers to easily create newsletters, brochures, etc., similar to PrintShop but specifically geared to educators. It was a huge hit. My next product was another language arts tool, Mickey’s Crossword Puzzle Maker. Using Disney characters, kids could create and print out their own picture-word crossword puzzles. It was already clear, back in the 1990s, that open-ended, tool-based learning was the best way to insinuate technology into schools.

There are, of course, many tools designed to help educators create Augmented Reality content and experiences. The more popular AR authoring tools that don’t require programming are Zappar, Blippar, HP Reveal, and Metaverse. But even the simplest of these AR creation tools are made more difficult because they require the ability to make, access, and manipulate 3D graphics. (That’s the virtual part!) There is simply not a great option for K-12 schools, i.e., one that is easy enough to be used by teachers and students but sufficiently sophisticated to support mixed reality experiences that go beyond simple target-based 3D pop-ups.

My sense is that the path that Google has chosen to win teacher acceptance for its AR/VR technologies will turn out to be the most productive. Google Expeditions is a great start, but students are still consumers of content, not creators. However, with Tour Creator, Google’s latest AR/VR product for schools, students grab scenes from Google Street View and compile them into a PowerPoint-like presentation with text and voice. Neatly solving the dilemma of where to get and how to manipulate 3D assets, Google’s Poly and Street View provide 3D graphics and 360 video for most locations. Soon, Google says, students will be able to view their tours created with Tour Creator in the Expeditions app and on new Chromebooks that now include ARCore, Google’s AR platform.

But there is much more that can be done. Why not…

  • Add a story creator capability to Tour Creator, so students can add more and different kinds of assets and content to their tour?
  • Create open-ended AR tools that are designed for specific subject areas, like chemistry or physics? Augmented reality could be used to highlight the reaction when you mix together chemicals to form a solution.
  • Design AR tools for real-time remote communication? Businesses are rapidly implementing AR to train workers on the repair of complex equipment. Why not use AR similarly to help students building projects in Maker Labs?
  • Develop an AR game creator with a variety of templates that can be customized or a quiz generator that uses object recognition to teach foreign languages?

We’re at the HyperCard inflection point with Augmented Reality. Now we just need the right tools to convince teachers that this is an educational technology worth having.

A True Turing Test

When the first smart speaker was announced by Amazon in 2014, I was unambiguously enthusiastic about its potential with kids. I immediately approached Amazon, then later Google about creating skills/actions for Alexa and Google Home. Sadly, I was told by both companies that they weren’t interested in children’s content because of privacy/COPPA concerns. Since my company couldn’t count on support from the platforms, and there wasn’t any way to monetize our development efforts, I didn’t pursue the opportunity.

Fast forward to today. Amazon just released a new child-friendly device, the Echo Dot Kids Edition, with a monthly subscription plan and parent-friendly features. Google Home has a huge library of family-friendly content. What accounts for the turnaround? Late last year the Federal Trade Commission revised its COPPA enforcement policy to essentially look the other way when companies collect voice recordings of children under the age of 13, stating that the FTC “would not take an enforcement action” as long as companies use an audio file only to transcribe a command and then immediately delete it. Adding fuel to the fire, Amazon and Google have disclosed that families love the devices, describing parents as “voice-assistance power users.”

Smart speakers (also known as voice assistants) have enormous potential with kids. They use speech recognition, natural language processing, artificial intelligence, and machine learning to understand what is being said and how to respond. They engage and empower kids with endless audio interactivity. Smart speakers support a natural way of interacting that doesn’t require reading, staring at a screen, or interpreting interfaces. Unlike parents glued to their cell phones, a voice assistant always reacts to the child and says something, even if it isn’t actually an answer to the question they asked. And there’s the rub.

Children neither speak nor think like adults. What are some of the unique issues that arise when designing voice interactions for kids?

  • To begin with, we don’t really understand what children are thinking when talking to a disembodied voice. What is their “theory of mind” as they stand in front of a Google Home or Echo device? This charming 2017 study from the MIT Media Lab, “Hey Google, is it OK if I eat you?”, revealed significant developmental differences. A younger child (4 years old) treated Alexa like a person, asking questions like “What’s your favorite color?” and “How old are you?” Older kids (7, 8, 9 years) treated it more like “a robot in a box,” believing that the smart speaker could learn from their mistakes. Knowing that young children think of the smart speaker as a human gives one pause. Will they “take it personally” when the voice assistant continually responds incorrectly to a query?
  • We know that memory for audio-only information is relatively poor among kids, especially compared to reading or watching a story. This was one of the key findings of my doctoral dissertation research many moons ago and should obviously be considered in the design of audio-only interactive content. Despite that, the first skill I helped design was a simple branching detective story, in the spirit of Encyclopedia Brown. We conveyed hints about “whodunit” throughout the story. Turns out there were way too many puzzle pieces to hold in short-term memory. We realized that we had to write a simpler story, break it up into much smaller bits, and continually reinforce important information.
  • Egocentrism is a key feature of childhood. Kids have difficulty putting themselves in the place of another person, seeing the world from another perspective. It is not until children are about 11 years old that they begin to accept the limitations of their knowledge and understand that their knowledge is not the same as others’. Asking a question of a voice assistant like Alexa, Siri, or Google requires them to adhere to some fairly strict conventions and to understand that they may have to adjust the way in which they ask a question in order to be understood. For example, when I asked, “How does an ice cream truck sound?” I got a charming response from my Google Home device (try it!). But when my four-year-old grandson asked what he thought was the same question, “Play me an ice cream truck,” we ended up with some rap music on Spotify. Getting Izzie to rephrase his question, to think about how to ask it differently in order to be understood by Google, was a tall (impossible) order. When we received a complicated Wikipedia answer to a question about cement mixers, Izzie walked away.
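The lesson from that detective skill can be reduced to a pattern: tiny story beats, only a couple of choices per beat, and every clue gathered so far repeated back each time so the listener never has to hold much in memory. Here is a minimal Python sketch of that pattern (the node names and story text are hypothetical, not our actual skill content):

```python
# Sketch of a branching audio story that reinforces clues at every step.
# Each node is one short "beat": a line of narration, an optional clue,
# and the next choices offered to the listener.

STORY = {
    "start": {
        "text": "A pie is missing from the bakery window.",
        "hint": "The thief left muddy boot prints.",
        "choices": {"search the yard": "yard", "check the kitchen": "kitchen"},
    },
    "yard": {
        "text": "In the yard you find a muddy boot by the fence.",
        "hint": "The boot matches the prints.",
        "choices": {},
    },
    "kitchen": {
        "text": "The kitchen is spotless. Nothing here.",
        "hint": "",
        "choices": {},
    },
}

def speak(node_id, hints):
    """Build the spoken prompt for one beat, repeating all clues so far."""
    node = STORY[node_id]
    if node["hint"]:
        hints.append(node["hint"])
    # Reinforce every clue gathered so far, every single turn.
    recap = " Remember: " + " ".join(hints) if hints else ""
    prompt = node["text"] + recap
    next_nodes = list(node["choices"].values())
    return prompt, next_nodes
```

A dialogue engine (Alexa skill, Google action, or a plain script) would call `speak` once per turn, carrying the `hints` list forward in the session state; the recap is what keeps a young listener from losing the thread.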

Creating a smart speaker/voice assistant for kids that can understand and respond satisfactorily strikes me as a true Turing Test, and one that we haven’t yet achieved. While adding kid-centric skills/actions is a good first step, smart speakers are still more frustrating than they should be, both in understanding children’s speech and intent, and in their responses, or lack thereof.








The Five Year Pivot


“Ode” was created by Vincent Carrella, www.vincentcarrella.com

I was thinking about how to summarize this podcast about my life and career, and kept coming back to “Shit Happens, Pivot.” Basically, I founded three companies in the interactive business, then went through gut-wrenching change about every five years in order to stay solvent. There were no roadmaps, no mentors…so naturally I made every mistake possible.

Oftentimes pivoting was necessary because of changes to hardware and operating systems. We needed to change not only our development tools (especially in the pre-Unity days); as we discovered with the transition from PC to mobile, new hardware also brings changes to distribution channels (app stores) and business models (free-to-play). This affected game design as well as game genres, and required a complete mind shift to games as a service. In other words, most of these pivots were excruciatingly difficult.

You can listen to the podcast, or, if you are interested in the abbreviated version of the twists and turns (so far) in my 30-year career, here goes.

  • I pivoted from academia to business, because I couldn’t stand the politics of university life and I wanted to make a difference. Doing memory research with nonsense syllables didn’t seem like the way to “put a dent in the universe” (Steve Jobs).
  • I went from being a design consultant to creating my own products, and completely underestimated how difficult that would be. Let’s face it…anyone who comes up with a character called “Mutanoid” deserves to fail!
  • I developed CD-ROM products for kids, until that business imploded at retail. Turns out, selling games for $0 isn’t a sustainable business model. Reminds me of the poor health of the kids’ mobile app market today.
  • I pivoted to digital distribution and decided to target women customers. They weren’t playing a lot of games at the time, but I knew women (like me) loved detective TV shows and books. I figured that if I just created content that was more appealing to women, they would come. Serendipity struck, and I sat next to Dick Wolf (the creator of Law & Order) at a benefit dinner. I persuaded him to make a game, the first one licensed by NBC Universal. We went on to develop and publish many more games based on TV shows, like Murder, She Wrote and Criminal Minds, and distributed them through downloadable portals like Big Fish Games. It was an excellent business while it lasted, but after about five years, smartphones came along and decimated sales.
  • How hard could it be to make mobile free-to-play games? The game mechanics seemed simple enough (e.g., match-3), the graphics were mostly 2D, and the app stores handled the financial transactions. We got lucky early on, with a mobile game called Atlantis: Pearls of the Deep. Google loved it, and promoted it heavily. Quickly we had a couple of million players. And then what did I decide to do? Instead of continuing to develop content for our game, updating it and servicing our existing customers, I made the classic mistake: I assumed we should immediately start work on the sequel. I clearly didn’t understand the “games as service” model, and threw away the best chance our company had of building a robust mobile business.
  • While still designing and developing games for mobile, I next pivoted to Augmented Reality. I was intrigued with the technology, plus it was obvious that there would be lots of opportunities in education, in addition to gaming. We won a Google competition to create a product for their new AR platform, which launched us into the world of computer vision. We needed help, however, and the decision to pursue this challenging new technology necessitated further changes that ultimately led to my joining HitPoint Studios, where I am currently, happily, President.

One thing I must make clear before closing. While I made all of the mistakes chronicled here, I have been truly fortunate in the people I have worked with over the years. Some amazingly talented and kind people, without whom I’d have no stories to share.

As Stanley Kunitz concludes in his wonderful poem, “The Layers”:

Though I lack the art
to decipher it,
no doubt the next chapter
in my book of transformations
is already written.
I am not done with my changes.