Ever since we decided to move away from a bespoke Oculus Rift interface for Datascape and instead concentrate on a WebGL interface, a whole bunch of opportunities have opened up. One of the benefits of doing WebGL for VR is that it gives us web access to Datascape visualisations from any device almost for free - even if you don't have a VR headset. But in the last week or so we've realised that we can use the same export feed from Datascape to generate the code to visualise the data in a whole range of different environments. For the moment there are two we are focussing on, but I'm sure more will follow.
The first is our own Fieldscapes. A Fieldscapes exercise is defined in XML, so we can use the data export from Datascape to generate the data points in Fieldscapes. Whilst you could visualise the data in an open field, you could also visualise it in something like the "activities space" in Daden Campus. All of a sudden you've got something very close to our old "Datascape 0" virtual war-room in Second Life. And since Fieldscapes is multi-user you can bring colleagues into the space to examine and discuss the data, and since Fieldscapes supports Oculus Rift (and Android phones, and soon Cardboard, and maybe iOS), some or all of the participants can be in VR as well!
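As a rough illustration of the idea, the export step amounts to turning each row of a Datascape data export into an XML element in a Fieldscapes exercise file. A minimal sketch in Python - note that the `Exercise` and `DataPoint` element names and attributes here are purely hypothetical, not the actual Fieldscapes schema:

```python
import csv
import io
import xml.etree.ElementTree as ET

def rows_to_exercise_xml(csv_text):
    """Turn a CSV data export into XML data points.

    The <Exercise>/<DataPoint> names are illustrative only; the real
    Fieldscapes exercise schema is not shown in this post.
    """
    root = ET.Element("Exercise")
    for row in csv.DictReader(io.StringIO(csv_text)):
        # One element per data row, carrying position and a label.
        ET.SubElement(root, "DataPoint",
                      x=row["x"], y=row["y"], z=row["z"],
                      label=row.get("label", ""))
    return ET.tostring(root, encoding="unicode")
```

The same generated-XML approach is what lets one export feed drive several target environments: only this last serialisation step changes per target.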
The second is AltSpaceVR.
The Digital Marketing Bureau has recently completed an infographic on players in the UK VR space. We're glad to see that Daden are on it - although the colour choices and marker size make it hard to see which category we (or anyone else) are in! For reference we think we're in (by their categories):
- Public Sector
And of course Dataviz!
Graphic is by thedigitalmarketingbureau.com
Custom Animations for Unity using Windows Kinect
by Nash McDonald
For the Daden U Day I decided to investigate how much effort was involved in creating animations for 3D characters using motion capture: what tools were available, and what results the process would yield compared to the traditional way of animating using key frames. For the motion capture I planned to use a Microsoft Kinect v1. I chose this because it was the only motion capture device available to me at the time. Ideally I would have preferred an Xbox One Kinect camera, as the Kinect v1 was first released on 16 June 2011, five years before the time of this post. I planned to use the Kinect for markerless motion capture. There are other cameras on the market for markerless mocap, but they are specialist equipment which require special rigs.
The morning was spent researching the best way to capture motion from the Kinect camera and turn it into animations usable in Unity with the Mecanim state machine. Many Google searches and countless YouTube videos later, I came to the conclusion that there were only two ways of accomplishing my goal for the day. The first was to record motion directly in Unity using the Microsoft SDK for Unity3D and custom scripts written by the Unity community. The second was to capture the motion using mocap (motion capture) software and then create animation files usable in Unity3D. I chose the latter because I wanted the ability to edit the animations before using them in Unity. I found a very good piece of software for this: iClone 6.
iClone 6 is an application used for generating 3D scenes and animations. It can animate objects as well as characters. With iClone 6 I was able to install the iClone Kinect mocap plugin. I imported one of the avatars from Fieldscapes into iClone, after which I was able to use the Kinect mocap plugin to record animations in iClone. See the image below.
I recorded a simple touch animation to simulate touching an object in Fieldscapes. The recorded animation would ideally have needed further refinement, as the recording from the Kinect was not very accurate. I was able to export the character as an FBX file containing the animation data, which can be imported into Unity 3D. The whole process from start to finish was mostly painless. I suspect that if the animation had been more complicated, an animator would have needed to spend a good few hours ironing out the rough edges of the mocap result. The process and equipment I used were very good for getting a rough start and speeding up the whole animation process, but could never replace the traditional way of animating each bone in the character's body using key frames.
David has just presented another Brighttalk Webinar. This one is more of a personal view about the future development of Virtual Reality, and uses the VR/Virtual Worlds scenario presented by Caprica (the excellent Battlestar Galactica prequel) as a key reference point.
You can also watch David's other recent Brighttalk webinars on VR for Data Visualisation and for eLearning:
We've released v2.0.2 of Datascape2. This is primarily a maintenance release, fixing bugs and making some improvements to usability. Key fixes are:
- If no mappings match then show new mapping screen, not empty match screen
- Fixed data import crashes with field name titled "Group" (reserved word)
- Fixed "sequence contains no matching elements" error with Mapping Templates (26/10/2016)
- Removed need to use ToLower() when doing some lookups (esp after a Round or other SQL function)
- The Global Spherical Mapping template was swapping Longitude and Radius values each time the data was re-plotted.
Full release notes at: https://dadenwiki.atlassian.net/wiki/display/DAT/R...
Datascape can be downloaded without registration from: http://www.daden.co.uk/conc/datascape/datascape-do...
We're running a short survey to help inform the final stages of the development of Fieldscapes, please find 5 minutes to fill the survey out here:
We have also just launched our first proper "Introduction to Fieldscapes" video. You can watch it here or on YouTube.
We found some nice data on the RCSB Protein Data Bank which gives the x/y/z location, in angstrom units, of every atom in a huge range of proteins. The data was in a fixed-width format and didn't take long to convert to CSV, adding in some of the metadata contained at the top of the file. The visualisation shows both the whole dataset, about 65,000 points covering 30 different co-located models, and then another mapping shows just a single model. Scrubbing is used to filter through the different models, and then on the single model to filter through the different chemical elements. Shape is used consistently to also show element, and colour to show either strand or element.
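For anyone wanting to try the same conversion: PDB ATOM records use fixed column positions (per the wwPDB file format, 1-based columns 31-38, 39-46 and 47-54 hold the x, y and z coordinates in angstroms). A minimal Python sketch of the kind of fixed-width-to-CSV step involved - the field selection here is ours, not a full PDB parser:

```python
import csv
import io

# Fixed-width slices (0-based) for PDB ATOM records, per the wwPDB
# format: x/y/z live in 1-based columns 31-38, 39-46 and 47-54.
FIELDS = [
    ("serial",  (6, 11)),
    ("atom",    (12, 16)),
    ("residue", (17, 20)),
    ("chain",   (21, 22)),
    ("x",       (30, 38)),
    ("y",       (38, 46)),
    ("z",       (46, 54)),
    ("element", (76, 78)),
]

def pdb_atoms_to_csv(pdb_lines):
    """Convert the ATOM records of a PDB file into CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(name for name, _ in FIELDS)
    for line in pdb_lines:
        if line.startswith("ATOM"):
            writer.writerow(line[a:b].strip() for _, (a, b) in FIELDS)
    return out.getvalue()
```

The element column from each record then drives the shape and colour mappings described above.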
This sort of protein visualisation was always something we thought that Datascape wouldn't be particularly brilliant at, and there are several dedicated apps to do it, but we were impressed at how good the results appear to look.
Today we released Version 2.0.1 of Datascape. This is primarily a maintenance release and includes:
- Several small usability improvements
- Several minor bug fixes
- The implementation of a new key based licensing system
- Ability to show multiple hover labels.
We've also now made a simple single click download link - no filling out any forms - so no excuse not to give it a try!
From here the route map is:
- v2.0.2 in 3-4 weeks to allow for workspace import and export to encourage greater sharing of visualisations
- v2.1 in 1-2 months which will feature export to WebGL of visualisations, to not only share on the web but also so they can be viewed in VR mode in Google Cardboard.
David's two Brighttalk webinars are now available for you to view for free and at your leisure:
- VR in Education: Moving the Classroom to Mars (and other field trips)!
- Using Virtual and Augmented Reality as Data Visualisation Environments
Enjoy, and we'd welcome any feedback. David will probably be doing another Brighttalk event looking more broadly at the use of 3D in DataViz in November.
Whilst prepping the slides for last week's Brighttalk 3D Dataviz Webinar (watch it now) I started to put together a taxonomy of 3D data visualisation.
The starting point is a 3D plot - we are plotting data against 3 axes, not 2.
There is then a big divide between an allocentric and an egocentric way of viewing the data. Allocentric means that your reference point is not you, it's something else; in egocentric viewing, you are the reference point. In practice this means that in an allocentric plot, if you move the viewpoint it feels like it's the data moving, not you; in an egocentric plot, if the viewpoint moves it feels like you're moving and the data is staying still. Since the latter is how the physical world works, it's what our eyes and brains are used to, so we feel more at home, and we can maintain context and orientation as we move through the data. Tests we did a few years ago with Aston University compared allocentric and egocentric ways of exploring 3D data, and showed that performance was generally better for the egocentric view.
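In rendering terms the two cases come down to the same view transform with opposite signs: rotating the plot in front of a fixed viewer, or rotating the viewer around fixed data. A minimal sketch in plain Python (the function names are ours, and for simplicity only rotation about the vertical axis is shown):

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point about the Y (vertical) axis by angle radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# Allocentric: the viewer stays fixed and the whole plot is rotated,
# so every data point moves in view space.
def allocentric_view(points, plot_rotation):
    return [rotate_y(p, plot_rotation) for p in points]

# Egocentric: the data stays fixed and the viewer turns; in view space
# this is simply the inverse rotation applied to every point.
def egocentric_view(points, viewer_rotation):
    return [rotate_y(p, -viewer_rotation) for p in points]
```

Geometrically the two produce the same image for opposite rotations; the difference described above is perceptual - which motion your brain attributes to itself.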
Within the allocentric branch the next divide is whether the plot is static (in which case I suppose you could argue it's neither allocentric nor egocentric) - as you might get in, say, Excel - or whether you can rotate and zoom the plot (as in something like Matlab). Are there any further sub-divisions?
On the egocentric branch we think the divide between viewing the data on a 2D screen (as in "3D" computer games) and viewing it through a VR headset in "real" 3D is far more a case of how you view the data than any fundamental change in how it is being plotted. To us the big benefit comes from going egocentric rather than allocentric, not from moving from a 2D screen to a 3D headset. In fact our experiences with Oculus DK1 and DK2 suggest that the 3D headset is actually a worse way of viewing data in many (most?) cases. Luckily Datascape will be agnostic between 2D and 3D displays once we release v2.1 - you'll be able to do both. 3D wall displays using head-tracking glasses are probably another example of a different view rather than a different method of plotting. But again, are there other more useful or detailed distinctions that can be made?