By Dr Lizzy Steenkamp
When Articulate Storyline released the 360 interaction feature, I was sceptical. I’ll admit, I couldn’t imagine a use case where it would add real value to a learning experience, and the example projects people were posting didn't convince me otherwise. But then we started seeing opportunities to create 360 interactions of our own, and we had to have a good think about how to make it worth the effort.
This is the story of one such project where we collaborated with OPUS, an organisation in the UK that trains care workers and other professionals on the safe administration of medicines. I’ll explore the objectives of the project, my criteria for an appropriate use case for a 360 interaction, how we created our 360 interaction in Storyline, and the degree to which we managed to make this interaction accessible.
It’s an interesting case. We had some unexpected results, and if you get to the end of this story, you’ll share in the value we gained from this experience.
First, a bit of context. We were tasked with creating an online competency assessment for folks who administer medicines. The typical learner is in charge of giving the persons in their care the right medicines at the right time via the correct route. A pretty high-stakes project, if you think about it. One might even say life or death. OPUS is the right organisation to tackle this challenge, as they are led by pharmacists with decades of experience in training all kinds of professionals in the safe administration of medicines. Check out their offering here.
Naturally, because this is a competency assessment, it was important for us to mimic real-life scenarios as closely as possible. In these scenarios, the learner would apply their skills as best they could, and receive a report that outlines areas for further development. Many of these skills involve recognising which procedure is appropriate for a situation and following that procedure correctly.
But, given that people’s needs and bodies change every day, the role also requires exceptional attention to detail. Even if Ayesha received the same medicine every day via the same route for 40 weeks, her dose might change the next day. Learners thus need to be very observant of changes in the environment or on objects such as medicines labels. This was our first clue that giving the learner a virtual space to explore might have an important role in this project.
There are many ways to achieve a project's objectives, and as we like to do pretty perfect work, we had to think through our options. How did we decide a 360 interaction could be the best solution? It was the interplay between a range of factors, including:

- the accessibility needs of our learner audience
- whether the skills are applied in a physical space
- whether exploratory navigation suits the task
- the value of immersion for this assessment
- our learners' shared expectations of the space
- our ability to create an authentic environment on budget

Let's explore each of these elements in turn.
We wouldn't use a 360 interaction if it would make life difficult for learners with accessibility needs, so we researched our audience. Although the role doesn't require perfect vision, care workers are typically expected to physically compensate for the disabilities of the persons in their care; they stand in the gap between what people need and what they are able to achieve on their own.
We thus had confidence our learners could see, but we did want to account for those who might use a screen reader to help manage challenges with visual processing. We also wanted to provide keyboard navigation for learners with mobility challenges. We tested these ideas and were soon confident that we could create a 360 interaction that meets the accessibility requirements of our learners.
Before you start designing a 360 interaction, first ask whether the relevant skills are actually applied in a physical, three-dimensional space in practice.
Are you teaching digital or abstract skills, like using SharePoint or having difficult conversations? Then perhaps no one needs to be able to look around the space for the activity to be meaningful, and a 360 interaction won't add enough value to justify the budget. There's also a good chance some of your learners will feel you're wasting their time.
As OPUS trains care workers in the administration of medicines, and administering medicine is a physical job, the 360 interaction would add value to the assessment. It could provide an authentic simulation of some of the learner’s important tasks.
Consider whether an exploratory approach, where the learner is in control of what they focus on, would make sense. If it's better to guide the learner's focus, for example to create a linear sequence, a more linear format, like video, might serve you better.
The interaction we had in mind required that the learners inspect different objects in the room and analyse the relationships between them. This would allow us to test whether they can pay attention to those important details on which they were taught to focus, and use those details to recognise procedural errors. So, exploratory navigation was a win, for us. It would also aid in achieving our other goal, which was to help immerse the learner into the context. More about that next.
An element of the job that consistently challenges us in our healthcare projects is how the learner’s emotional responses can impact their ability to make decisions or observe details. An authentic experience minimises the learner’s awareness of its artificial nature and helps them to forget they’re engaging with a computer. And although complete suspension of disbelief is not achievable within the budget of the typical e-learning project, we can try our best to approximate that by telling a good story, and immersing the learner as much as we can with powerful, familiar imagery.
In a standalone assessment, you don’t have much time to situate the learner. You want them to be able to start the assessment while they can still concentrate. A 360 interaction could serve as an economical way to achieve this sense of immersion.
Free navigation forces the learner to look around, observe details, and make a few decisions based on what they are most curious about, which are all things that help with immersion.
But, it's also important that the environment and objects we encourage them to explore feel familiar. Our ability to create an authentic environment depended on some key factors. One of these is how many of our learners have shared expectations about what the space, objects, and characters should look like, which we'll explore next.
We wanted to avoid a sense of ‘fakeness’ with our interaction, as this pulls the learner out of the experience and goes against our aims of immersing them. Something feels “fake” if details are inaccurate: the texture of a wall, the shadow of a chair, or the curve of a skirting board.
Pursuing immersion is pursuing authenticity. What does it look like, how does it feel, what are they being asked to do, and how well do these elements match the learner’s lived experience?
Well, the first challenge all disciplines face when trying to create an immersive experience is that people’s lived experiences are about as subjective as it gets. So, the broader your audience, the lower your chances of finding common details that would be familiar to all. If your audience is not narrow enough to have shared experiences of similar spaces and objects, you might be hiking up the down escalator. To create spatial familiarity, your learners not only need to share a specific job role, they may also need to work in a shared geographic area.
In our example project, we could just about force this to be the case. Although our learners all worked in similar roles, we did have to split our audience in two and create separate 3D rooms for those who work in schools, and those who work with adults. The buildings look quite different, as do the characters. So, we allowed our learners to select their setting, and then we could show them the virtual room that would be most familiar to them.
Luckily, our learners were all located in the same country, so we could use reference images from schools and care facilities in the UK. There was no risk of distracting a learner from equatorial South America with radiators, or a learner from South Africa with an absent battery pack next to the computer (#save-our-grid).
Once we had established how to meet our learners' spatial expectations, we could shift our attention to the second tine of our authenticity fork: our ability to create environments with accurate details that feel familiar.
The key obstacle here: we’re in South Africa, the learners are expecting UK care homes, and there was no budget for a custom set.
So, taking a photo with a 360 camera was out of the question. In addition, even though the realism of 360 photography is hard to beat, the equipment can be costly, and any big changes that need to be made down the line may even require a reshoot. Post-production would be a wasteland of dead ends for any late ideas.
Another downside is that since the camera sees in all directions, your tripods, lights, and in some contexts even the photographer, can’t be hidden out of the frame. These things can be removed in post-production, but this adds costs and delays. Not the easy process you were hoping for.
Finally, taking the photo with a 360 camera also requires some finesse when it comes to lighting. Remember, whereas we can place lights behind the camera for 2D photography, there is no “behind the camera” for 3D photography, so taking a photo indoors with adequate lighting is a real challenge. So, we had to get a little creative.
If you're anti-360-camera like we were (for this project), there are two broad paths you can go down to create your 360 room. The first option is to take a bunch of 2D photos and stitch them together, which we already knew wouldn't be the full solution for us since securing the right space was not possible. The second method is to build the environment digitally using 3D software and render the 360-degree views.
A rendered image is easier to manipulate and iterate over time, and doesn’t require a perfect physical location. There is, however, one big downside to the 3D render: it is very difficult and costly to create realistic-looking characters using this approach. Since empathy with the characters was an important objective, an uncanny valley character was just not going to do the trick.
So, we opted for a middle ground and decided to take 2D photographs of models using the appropriate perspective, and then placed those photographs inside the 3D environment render. This was an approach that we could manage well with the budget and talents at hand.
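Compositing a flat photograph into an equirectangular render comes down to deciding where in the panorama the subject should sit. As a rough sketch (the function name and resolution here are illustrative, not from any tool we used), the yaw and pitch of the subject relative to the camera map to pixel coordinates like this:

```python
# Map a viewing direction (yaw, pitch in degrees) to pixel coordinates
# in an equirectangular panorama. Yaw 0 / pitch 0 is the image centre;
# yaw increases to the right, pitch increases upward.
def equirect_anchor(yaw_deg, pitch_deg, width=4096, height=2048):
    x = (yaw_deg / 360.0 + 0.5) * width
    y = (0.5 - pitch_deg / 180.0) * height
    return x, y

# A character standing slightly right of centre, just below eye level:
x, y = equirect_anchor(20, -5)
```

A flat paste like this only looks right near the horizon, where equirectangular distortion is small; a subject placed far above or below eye level would need to be warped to match the projection.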
Once we had determined that a 360 interaction would add value to our competency assessment, we chose which competencies we wanted to assess at this point in the learner's journey. These competencies then served as a guide from which we could draft questions and conceptualise the interaction. We then created a mockup of the 360 rooms, which allowed us to plot the objects and interactions that would test the competencies we wanted to test.
Next, we created an initial low-detail version of the 360 room with all the objects in place. This allowed us to plan where the characters would be placed, which in turn helped us to calculate the correct perspective and distance from the camera to use during 2D photography.
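The perspective-matching step is standard pinhole-camera maths. As a hedged sketch (the field of view and distances below are illustrative, not our actual production values), the on-image size of a person at a given distance follows from the camera's focal length expressed in pixels:

```python
import math

# Pinhole model: how tall (in pixels) a subject appears, given the
# camera's horizontal field of view, the image width, the subject's
# real-world height, and their distance from the camera.
def apparent_height_px(hfov_deg, image_width_px, subject_height_m, distance_m):
    # Focal length in pixels, derived from the horizontal FOV.
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return focal_px * subject_height_m / distance_m

# A 1.7 m person photographed from 3 m with a 60-degree lens
# on a 4000 px wide frame:
h = apparent_height_px(60, 4000, 1.7, 3.0)
```

Matching this value (along with the camera height) between the photo shoot and the virtual camera is what keeps pasted characters at a believable scale inside the render.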
We then went into pre-production for photography, running around sourcing the correct props, costumes, talent and furniture. We were lucky enough to secure a green room large enough to take our photos with the correct composition. Once the photos were taken, we edited them, placed the characters into the 360 room in the positions we had planned for them, and edited further to add realistic shadows.
We spent more time finessing the 360 rooms, which involved editing 3D objects to look more like they would in the UK, and adding details learners would recognise to make the room feel real. This required quite a bit of iteration and collaboration with the subject-matter experts.
Finally, our 3D scenes were complete and we were ready to insert them into Articulate Storyline.
This part of the process was fairly simple. We inserted our 360 renders into Storyline, and set an appropriate starting view that would allow the learner to orient themselves. We then added hotspot markers on all the important focal areas in the room.
Each hotspot reveals a zoomed-in view of a specific object or area, along with a question about that object. Because we chose to build these 360 environments digitally, we could easily generate these zoomed-in 2D renders by adding a 2D camera into the existing scene.
To answer the questions correctly, learners would need to explore and compare all the objects in the room, just as they would in real life. Therefore, we set up the hotspot navigation so the hotspots could be visited in any order, and learners could choose to answer a question only when they felt they had gathered enough information to draw their conclusion.
However, we didn’t want the learners to be too overwhelmed by the experience, so while we did keep navigation free, we also added numbers to the hotspots to help learners have somewhere to start. Hotspot 1 is the first object that a learner would usually consult when performing the procedure on the job, so this still mimics reality reliably.
Finally, we made the interaction as accessible as we could. We added descriptive alt text to the objects in the room, hotspot markers, and all informative imagery. We also created a logical focus order for the questions, removing any decorative images and unnecessary shapes. We then added an additional set of instructions for the learners using keyboard navigation, to help them get used to this new type of interaction.
Although we had spent a lot of energy getting to know our learner audience before we kicked off design, we did overestimate how comfortable they were with technology. When user testing our 360 interaction, it became apparent that many of the wonderful folks who assume the role of a care worker are … Well, people-persons, not tech-persons.
We were kicking ourselves a bit, because in retrospect it was clear that an assessment is not the right place for this type of learner to encounter a new technical demand on their abilities. Navigating the room was just a little bit too scary for them, and this made us all feel like it was doing more harm than good. Getting into a technical fuddle is not what you want at the start of a competency assessment; that’s the opposite of an immersive experience.
So, the 360 interaction instead became a tool for virtual instructor-led training. Its purpose is the same as in our competency assessment, but the stakes are lower. Since training is about getting acquainted rather than tested, this context helps learners to assume the posture of the brave explorer more naturally. And, since the instructor is there to help facilitate, the tech-anxiety can be dispelled fairly easily.
And that’s the story of the 360-interaction that we created for OPUS. Remember to check out the training that OPUS offers here. If you want to explore how Who’s your ADDIE? can level up your project, get in touch!