CHARACTERISE – Virtual Human Open Simulation Framework for Cultural Heritage
While 3D representations of cultural heritage sites are becoming more common, the empty streets of these models can often seem sterile and lifeless. Adding avatars and virtual humans can enrich these static models and improve the user's experience. In the new CHARACTERISE project, experts from Switzerland and the United Kingdom will join forces to create a ‘Scene Population Toolkit’, which will make it possible to place intelligent, multilingual avatars into virtual scenes.
“Many excellent exemplars of individual atomic technologies have been developed, but we still lack a well-understood and generally accepted strategy for putting them together so that they provide a whole which is bigger than the simple sum of its parts,” explains Professor John Glauert (University of East Anglia). “The missing element is an open source framework to glue them together. This would curb the complexity and make the resulting system a consistent and seamless unity, while leaving open handles and hooks for replacements and extensions. Virtual humans are already considered in EPOCH, and we propose to extend existing activities by developing a real-time integrated platform for realistic, populated mixed-reality cultural heritage storytelling simulations, and by delivering open source tools and software for direct application in EPOCH showcases.”
The CHARACTERISE project will thus create a Scene Population Toolkit for placing intelligent, multilingual avatars, created with the partners’ avatar creation software and powered by speech synthesis, into virtual scenes. The toolkit will feature intelligent navigation within a scene, as defined by the scene description. The distribution of crowds throughout the scene will be initialised automatically using a ‘placement tool’, with the possibility of interactive modification. Crowds will then behave intelligently, with the corresponding runtime engine avoiding collisions, supporting interactions with the user, steering gestures, and following a crowd dynamics simulation. Appropriate level-of-detail (LOD) descriptions and level blending schemes will be developed so that the avatars engage the user as fully as possible. Multilingual speech technology will be integrated, and file formats are expected to be standardised at all stages of the avatar creation pipeline.
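To make the idea of a ‘placement tool’ and runtime collision avoidance concrete, the following is a minimal sketch of both steps. It is purely illustrative: the names (`Agent`, `placeAgents`, `separationStep`), the 2D ground-plane model, and the rejection-sampling and pairwise-separation strategies are assumptions, not the toolkit's actual design.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Illustrative agent: a position in the 2D ground plane of the scene.
struct Agent { double x, y; };

// 'Placement tool' sketch: scatter agents over a rectangular region,
// rejecting candidates that fall closer than minDist to an agent
// already placed (simple rejection sampling).
std::vector<Agent> placeAgents(int count, double width, double height,
                               double minDist, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> ux(0.0, width), uy(0.0, height);
    std::vector<Agent> agents;
    while (static_cast<int>(agents.size()) < count) {
        Agent c{ux(rng), uy(rng)};
        bool ok = true;
        for (const Agent& a : agents) {
            double dx = a.x - c.x, dy = a.y - c.y;
            if (std::sqrt(dx * dx + dy * dy) < minDist) { ok = false; break; }
        }
        if (ok) agents.push_back(c);
    }
    return agents;
}

// One runtime tick of collision avoidance: agents closer than minDist
// push each other apart along the line joining them.
void separationStep(std::vector<Agent>& agents, double minDist) {
    for (std::size_t i = 0; i < agents.size(); ++i)
        for (std::size_t j = i + 1; j < agents.size(); ++j) {
            double dx = agents[j].x - agents[i].x;
            double dy = agents[j].y - agents[i].y;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d > 0.0 && d < minDist) {
                double push = 0.5 * (minDist - d) / d;
                agents[i].x -= dx * push; agents[i].y -= dy * push;
                agents[j].x += dx * push; agents[j].y += dy * push;
            }
        }
}
```

A real runtime engine would combine such a separation force with goal-directed steering, user interaction, and the LOD scheme mentioned above; this sketch shows only the spacing logic.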
The Swiss partners in the project, MIRALab (Geneva) and VRLab (Lausanne), have already made substantial progress on a modern real-time framework, codenamed VHD++, which has been successfully adopted in various EU projects. The main challenge MIRALab and VRLab will undertake is therefore to elaborate the methodology, guidelines, and architectural, design and behavioural patterns needed to construct a vertical middleware framework supporting the development of interactive, audio-visual, real-time systems featuring efficient virtual character simulation, especially for 3D virtual heritage. The goal of the project is to release the basic VHD++ kernel and plug-in technologies as open source, enabling 3D virtual character simulations in 3D cultural heritage environments.
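The kernel-plus-plug-ins idea can be sketched as follows. This is a hypothetical illustration in the spirit of VHD++, not its actual API: the names (`Service`, `Kernel`, `CrowdService`) and the single-threaded update loop are assumptions made for clarity.

```cpp
#include <memory>
#include <string>
#include <vector>

// A plug-in ('service'): the kernel drives every service through
// a common update interface, one call per simulation tick.
class Service {
public:
    virtual ~Service() = default;
    virtual std::string name() const = 0;
    virtual void update(double dtSeconds) = 0;
};

// Minimal kernel: owns the registered services and ticks them in order.
// New capabilities are added by registering services, not by modifying
// the kernel itself -- the 'open handles and hooks' idea from above.
class Kernel {
public:
    void registerService(std::unique_ptr<Service> s) {
        services_.push_back(std::move(s));
    }
    void tick(double dtSeconds) {
        for (auto& s : services_) s->update(dtSeconds);
    }
    std::size_t serviceCount() const { return services_.size(); }
private:
    std::vector<std::unique_ptr<Service>> services_;
};

// Example plug-in: a stand-in for a crowd-simulation service that
// would advance the crowd state each tick.
class CrowdService : public Service {
public:
    std::string name() const override { return "crowd"; }
    void update(double dt) override { elapsed_ += dt; }
    double elapsed() const { return elapsed_; }
private:
    double elapsed_ = 0.0;
};
```

In such a design, the open-sourced kernel stays small and stable, while crowd simulation, speech synthesis, or rendering each arrive as replaceable plug-ins.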
The University of Brighton will be responsible for evaluating the tools and for testing the acceptability of the experiences generated with them.