
The Lens Suite: Part 3


Ideas and tools for lens work

Virtual realities can affect live entertainment in positive ways, allowing us to better position and integrate such environments, convey narratives, and shape collective emotions more deliberately and in ways that are more relevant to today's audiences. In contrast to personal- and crowd-lenses, the special immersion of what we called the group-lens originates from audiovisual technologies and tangible surroundings augmenting one another. The particular allure of these spaces is their unique design, and this customisation makes the involved workflows and creative decisions unique too.

At this point I would like to move from conceptual and broader design considerations to specific tools that group-lenses demand and that aid in creating them. I do this because, in my opinion, it is important to be specific about the type of lens when designing for it: interactivity in lens-spaces has a two-fold aspect.

Previously, we saw that, above and beyond the synthesised world, the key immersive factors are personal freedom and interactivity in personal lenses, and exaggerated shared emotion in large lens-spaces. Group-lenses exist between these two extremes. Their audiences are private groups, free to choose their viewpoint and to interact with each other and the environment. Nevertheless, group- and crowd-lenses have in common that their audiences typically encounter a preset narrative and timeline.

This is different for the creators. Any kind of lens is a technically complex system that requires many departments to achieve a unified outcome. Despite, or maybe because of, this complexity, I would argue that creatives need ways to gain immediate feedback and make changes quickly in order to achieve impact and immersion.

When Edward Hodge, Vice President of Creative, Story & Innovation at BRC Imagination Arts, says “We built all these tools, now I need to work the room!”, he refers to a moment when seemingly all elements of a show are commissioned and at his disposal, but not yet playing in concert, or perhaps still missing an unknown ingredient. He needs to let go of the previsualisation and feel the physical space to give it the magic finish.

This exemplifies the workflow challenges and reveals the difference between personal lenses and the other types. Headsets need software suites for real-time, first-person spatial computing, something manufacturers are providing. Group- and crowd-lenses are customised live entertainment systems, amalgamating different departments (sound, light, automation etc.) and dictated by classical sequencing and cueing. As a result, their fundamental technologies are only partially real-time. Nonetheless, they need to offer tools allowing immediate changes, at least during the design phase.

At this point I would like to examine three methods we have worked on that I now consider in service to group-lenses, supporting the creative process. All are specific to our own workflows and services, and at the same time they are tasks that have re-emerged in our practice recently.

Of course no two problems are ever quite alike, so the benefit may lie in the bigger picture rather than in a specific outcome. Therefore, instead of tutorials, I will describe the playing field or brief, explain our response, and review key elements of the implementation process or findings.

Case I: Atlas maps for efficient exchange between real-time engines and media servers

In fixed installations we have seen that procurement often happens long in advance of any onsite installation and programming. As a result, crucial aspects of the hardware, such as GPUs or networking, might be one or two generations old or not immediately compatible with the required workflows. For these reasons we like to build tools that preserve performance headroom.

When we were asked to work on the lobby of the Johnnie Walker Princes Street experience, we had to integrate content for a large video canvas with custom LED backlights driven by a lighting desk, as well as customisations for buy-out events.

The media server installed was capable of the playback, but unfortunately it had no awareness of the 3D space. Therefore the lighting previsualisation, rendering and buy-out features were built in a real-time engine, Notch. We bundled and rendered all elements into one 2D atlas: a pixel-perfect section representing each display within one larger texture. The result, similar to a sprite sheet from older games, is then brought into the media server and distributed to the outputs as required, a feature referred to as feed-mapping.

3D Context → Atlas-Packed RT-Engine Output → Media Server → Feed Output to Hardware
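To make the idea concrete, here is a minimal Python sketch of shelf-packing per-display regions into one atlas texture. The display names and resolutions are invented for illustration and are not the actual Johnnie Walker setup; in practice the packing lives inside the engine and the resulting rectangles become the feed-map in the media server.

```python
# Minimal sketch: pack per-display regions into one 2D atlas (shelf packing).
# Display names and resolutions are illustrative, not the actual installation.

from typing import Dict, Tuple

def pack_atlas(displays: Dict[str, Tuple[int, int]], max_width: int = 4096):
    """Return pixel-perfect (x, y, w, h) regions for each display inside one texture."""
    regions = {}
    x = y = shelf_height = 0
    for name, (w, h) in sorted(displays.items(), key=lambda d: -d[1][1]):
        if x + w > max_width:          # start a new shelf when the row is full
            x, y = 0, y + shelf_height
            shelf_height = 0
        regions[name] = (x, y, w, h)
        x += w
        shelf_height = max(shelf_height, h)
    atlas_size = (max_width, y + shelf_height)
    return regions, atlas_size

if __name__ == "__main__":
    displays = {
        "main_canvas":   (3840, 1080),   # hypothetical LED canvas
        "backlight_map": (512, 128),     # low-res map sampled for the LED backlights
        "column_left":   (256, 1024),
        "column_right":  (256, 1024),
    }
    regions, size = pack_atlas(displays)
    for name, rect in regions.items():
        print(name, rect)                # these rectangles define the feed-mapping
    print("atlas size:", size)
```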

Beyond circumventing the restrictions posed by the hardware, the performance benefits are significant as well. This is because the atlas map can be compacted, leaving little unutilised texture space, and it runs as a single process instead of many.

Since then we have begun using the atlas approach in more installations that use real-time effects, even when the media server supports more complex mappings. Despite longer setup times, atlasing expands capabilities, and the performance savings help to bridge scope gaps, which in turn gives directors more resources in the creative commissioning phase of the environment.

Case II: Procedural 3D tools: almost instant addressing updates for complex display designs without impacting content delivery

Recently we contributed to an experience produced in Georgia, USA, which cannot be disclosed in detail; any descriptions and illustrations shown here therefore exemplify the process and do not resemble the actual work. The walk-through show consists of a number of rooms nested inside a larger building. Entrance and exit to the experience are fully integrated into the surrounding architecture and interior design. Our brief was to deliver a comprehensive workflow with templates for all video-driven elements across the rooms, build media server setups and create real-time effects for anything not exclusively run with rendered contents.

The most challenging part of this work was posed by a number of bands weaving through the main attraction. Their organic flow, ever-changing curvature and varying width make them sculptures in their own right, while also completing the immersion in the room by seamlessly connecting the narrative played out on many different fixtures and displays.

The design constraints demanded that the bands be hand-fitted with LED pixel strings, meandering across each surface in an orientation optimised for even visual distribution and technical addressability. While the LED schematics and DMX addressing were being worked on, we had a proxy model of the bands' final shapes, but lacked the position of each pixel within the room, which would be crucial to making engulfing real-time effects. Our idea was to create and maintain the content templates from the bands' preview shapes and insert the correct translation for each pixel later, allowing engineering and creative development to happen in parallel.

Curve Design → Surface Construction → 2D String + Pixel Layout, Compact UVs → 3D Remapping
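As a rough sketch of that remapping step, the following Python example keeps pixels in a compact 2D layout (string id, index along the string, and UVs) and translates each one into a 3D room position by sampling a proxy surface of the band. The curve, width and parametrisation are placeholders, not the project's real geometry, and a production tool would offset across the band using proper surface normals.

```python
# Minimal sketch: translate the compact 2D pixel layout (UVs) into 3D room positions
# by sampling a proxy surface of the band. The curve and width are placeholders.

import math
from dataclasses import dataclass

@dataclass
class Pixel:
    string_id: int
    index: int     # position along the LED string
    u: float       # 0..1 along the band
    v: float       # 0..1 across the band width

def band_surface(u: float, v: float):
    """Proxy band: a sweeping curve through the room with a varying width."""
    # Centre curve (illustrative path through the room)
    cx = 4.0 * math.cos(u * math.pi)
    cy = 6.0 * u
    cz = 2.5 + 0.8 * math.sin(u * 2.0 * math.pi)
    # Band width varies along its length
    width = 0.4 + 0.2 * math.sin(u * math.pi)
    # Offset across the band (simplified; a real tool would use the surface normal)
    ox = (v - 0.5) * width * math.sin(u * math.pi)
    oz = (v - 0.5) * width * math.cos(u * math.pi)
    return (cx + ox, cy, cz + oz)

def remap_pixels(pixels):
    """Return {(string_id, index): (x, y, z)} world positions for the effect engine."""
    return {(p.string_id, p.index): band_surface(p.u, p.v) for p in pixels}

if __name__ == "__main__":
    # A tiny example string; the real bands held thousands of pixels each.
    pixels = [Pixel(string_id=1, index=i, u=i / 99.0, v=0.5) for i in range(100)]
    positions = remap_pixels(pixels)
    print(positions[(1, 0)], positions[(1, 99)])
```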

This simple translation soon turned out to be a big task, with the smaller bands already containing over 3,500 LEDs. The shape information allowed us to make the proxies swiftly, but the pixel positioning had many limitations. For example, finished sections of the bands had to be positioned and combined onsite, their seams covered with unknown string lengths at the last minute. So, for safety, we gave the proxy workflow an increased resolution and then began making new 3D tools.

The complexities involved dictated a fast and interactive way to update the LED positions and the pixel/DMX addresses. This led us to build all 3D meshes procedurally, but instead of solving the band problem outright, we broke the process into smaller chunks and tested them first on more basic, linear elements of the installation, such as shelves and frames. Our procedure was functional only a few weeks before going onsite, and we continued improving it even while working in the room, but in the end it took only minutes to update over 85,000 addresses after an engineering change.
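A simplified Python sketch of the addressing side is shown below, assuming RGB pixels (three channels each) packed 170 to a 512-channel universe; real controllers and patch conventions differ, but the point is that regenerating the full patch after a geometry change is a cheap, repeatable computation.

```python
# Minimal sketch: regenerate DMX addressing for every pixel after an engineering change.
# Assumes RGB pixels (3 channels each) and 170 pixels per 512-channel universe;
# real controllers and patch conventions will differ.

CHANNELS_PER_PIXEL = 3
PIXELS_PER_UNIVERSE = 170   # 170 * 3 = 510 channels, leaving 2 channels unused

def patch_pixels(pixel_ids, start_universe=0):
    """Return {pixel_id: (universe, start_channel)} in patch order."""
    patch = {}
    for n, pixel_id in enumerate(pixel_ids):
        universe = start_universe + n // PIXELS_PER_UNIVERSE
        channel = (n % PIXELS_PER_UNIVERSE) * CHANNELS_PER_PIXEL + 1  # DMX channels are 1-based
        patch[pixel_id] = (universe, channel)
    return patch

if __name__ == "__main__":
    # ~28,500 RGB pixels amounts to roughly 85,000 channels, in the order of the real bands.
    pixel_ids = [f"band_A_{i:05d}" for i in range(28500)]
    patch = patch_pixels(pixel_ids)
    print(patch["band_A_00000"], patch["band_A_28499"])
    print("universes used:", len({u for u, _ in patch.values()}))
```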

The overall process led to some additional observations. Building one tool or process that instantly fits all situations is close to impossible, and it does not serve changing parameters. This is why we embraced stopgaps and a near real-time turnaround, which resulted in a suite of generalised gizmos instead of a single one. What is true for software also holds for making complex displays: we really had to work closely with interior designers, model makers and engineers, supporting each other to achieve the intended results. Organically shaped, custom LED screens will be in demand beyond lens-spaces, so their design, workflow and management are critical, and both commissioning and running them need improvement.

Case III: Narrative-based augmentation vs. abstract virtual fixtures

I mentioned rendered lights at the Abba Voyage experience earlier. Let me explain some observations made there and then examine virtual lights further.

The incredible immersion at the Abba show depends on tricking our depth perception into mistaking the CG band members for real people in front of a pitch-black background. Since the band is displayed on an LED wall, light spill on stage and from the house is carefully managed. To avoid the display feeling too flat, the designers added a rendered video of a truss full of moving lights at the very top. These lights work in full synchronisation with the physical lights across the ceiling. By recreating the effects of the virtual light sources on the virtual band, the rendered space completely merges with the real room. While Abba Voyage uses a video of the rendered fixtures, virtual lights can also exist without featuring as content themselves, instead only displaying the illumination resulting from an imaginary source.

For the work described in Case II, we placed a set of virtual lights in a virtual twin of the room, calculated how they affect the band screens, and output the result to their surfaces. This way the director can treat an emissive surface as though it were illuminated like any other object hit by comparable physical lights. The effect can be unifying as well as contrasting, and it exposes new behaviours, such as light without shadows.
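To illustrate the principle, here is a minimal Python sketch that evaluates one virtual spotlight per LED pixel: a cone test plus a distance falloff, returning the brightness written to that pixel's 3D position. The parameters and maths are illustrative, not the production implementation.

```python
# Minimal sketch: evaluate a virtual spotlight per LED pixel. Each pixel has a 3D
# position (from the remapping step); the light has a position, direction, cone
# angle and falloff. Parameter values are illustrative only.

import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def spotlight_intensity(pixel_pos, light_pos, light_dir, cone_deg=25.0, max_dist=10.0):
    """Return 0..1 brightness contribution of the virtual spot at one pixel."""
    to_pixel = tuple(p - l for p, l in zip(pixel_pos, light_pos))
    dist = math.sqrt(sum(c * c for c in to_pixel))
    if dist > max_dist:
        return 0.0
    # Angle between the light direction and the direction to the pixel
    cos_angle = sum(a * b for a, b in zip(normalise(to_pixel), normalise(light_dir)))
    cos_cone = math.cos(math.radians(cone_deg))
    if cos_angle < cos_cone:
        return 0.0
    # Soft cone edge and a simple distance falloff
    edge = (cos_angle - cos_cone) / (1.0 - cos_cone)
    falloff = 1.0 - (dist / max_dist) ** 2
    return max(0.0, min(1.0, edge * falloff))

if __name__ == "__main__":
    light_pos = (0.0, 0.0, 4.0)           # virtual fixture hung near the ceiling
    light_dir = (0.0, 0.0, -1.0)          # pointing down
    for pixel in [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (6.0, 0.0, 1.0)]:
        print(pixel, round(spotlight_intensity(pixel, light_pos, light_dir), 3))
```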

Our virtual spotlights are the top layer of a suite of further real-time effects: expressive, customisable, and tailored to the band geometries. These include particle effects and wipes along the surfaces, tightly defined to work within a clear structure of variables, making the bands into fixtures in their own right.

Virtual lights and effects both constitute devices ideally controlled by a lighting desk. On one hand this is because of their abstract nature; on the other, it is because of the great number of variables required, which are handled best on control surfaces. Each spotlight used about a dozen channels, while each tailored effect surpassed fifty. In total, the virtual devices in the main room require a little over 700 channels.

Being mandated by the lighting department means any documentation has to cross from the effect engine to the control desk, while further design directives are given by the creative team to the LD. The many channels need names that resonate with the lighting department's jargon, and variables must be normalised to the value ranges of the desk on which they get programmed.

Case III: abstraction variables
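One way to formalise that handover is to declare every effect variable as a named channel with an explicit range, so desk values translate back into engine parameters. The sketch below uses 8-bit channels and invented names; the real patch sheet follows the LD's conventions and channel counts.

```python
# Minimal sketch: declare effect variables as named channels with explicit ranges,
# so 8-bit desk values map back into engine parameters. Names and ranges are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str        # name as it appears on the desk / patch sheet
    minimum: float   # engine value at DMX 0
    maximum: float   # engine value at DMX 255

    def from_dmx(self, value: int) -> float:
        return self.minimum + (value / 255.0) * (self.maximum - self.minimum)

# One virtual spot used roughly a dozen channels in production; a few shown here.
VIRTUAL_SPOT = [
    Channel("Intensity", 0.0, 1.0),
    Channel("Pan",     -180.0, 180.0),   # degrees
    Channel("Tilt",     -90.0,  90.0),
    Channel("Zoom",       5.0,  60.0),   # cone angle in degrees
    Channel("Edge",       0.0,   1.0),   # cone softness
]

def decode(channels, dmx_values):
    """Translate one fixture's DMX frame into engine parameters."""
    return {ch.name: ch.from_dmx(v) for ch, v in zip(channels, dmx_values)}

if __name__ == "__main__":
    print(decode(VIRTUAL_SPOT, [255, 128, 64, 32, 200]))
```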

In other situations the generative art is directly informed by the narrative, and therefore more literal. The Legend of Luna's real-time effects are based directly on the traditional animation, to ensure that the augmentation perfectly matches the overall style. Here the direction is predominantly in the hands of the creative director and the content team, who need to manage far fewer variables compared to virtual lighting. The channels for a stylised blizzard or a bunch of butterflies are much more descriptive than gobo settings, for example.
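As a hypothetical illustration of that contrast, the snippet below compares a narrative effect with a handful of descriptive parameters against the long, generic channel list of an abstract fixture; none of the names are taken from the actual productions.

```python
# Hypothetical illustration: a narrative effect exposes few, descriptive parameters,
# while an abstract virtual fixture exposes many generic ones.

NARRATIVE_BLIZZARD = {       # directed by the content team
    "snow_density": 0.6,
    "wind_direction_deg": 220.0,
    "flake_size": 0.3,
    "story_intensity": 0.8,  # one knob tied to the scene's emotional beat
}

ABSTRACT_EFFECT_FIXTURE = [  # programmed on the lighting desk
    "Intensity", "Speed", "Scale", "Direction", "Hue", "Saturation",
    "Contrast", "Noise Amount", "Noise Scale", "Wipe Position",
    "Wipe Softness", "Particle Rate", "Particle Life", "Gravity",
    # ...and several dozen more in a production rig
]

print(len(NARRATIVE_BLIZZARD), "descriptive parameters vs",
      len(ABSTRACT_EFFECT_FIXTURE), "+ generic channels")
```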

As we have seen, virtual lights and abstract effects lead to real-time tools that enrich group-lens entertainment. The engines can either provide the primary screen content or describe an augmentation layer on top of other media. The essential considerations are which department the tools answer to, and whether the visual requirements are expressive and multifunctional, or underpinned by the narrative and closely adhering to established styles. This decision has a direct impact on the number of channels that will be needed and changes development times. In our experience, abstract tools can be made faster, while plot-driven ones contain more of the animation's features, which take longer to make. At this point I also think that our tools, specifically the real-time effects, have repercussions for the creatives using them. It is easy to overlook that the developer of a tool is not necessarily its user, who needs to be included in the generation process.

Credits

Thanks to:

Daniela Hornskov Sun

Anthony "Bez" Bezencon

Ollie Newland

and many more...
