360 Camera Gimbals: Where Are They?

Since the Movi rig was announced in 2013, gimbals have become prevalent in 2D filmmaking. They enable smooth motion across multiple axes using the latest in brushless motor technology. In 360, a gimbal would go a long way toward making VR content smoother and more comfortable for the end user. There have been a number of demo products, but only one has made it to market so far. The reason 360 gimbals are not yet common comes down to both technical and economic factors. A 2D gimbal rig works by holding the camera directly at the rig's centre of gravity and actively counteracting movement to keep it there. This means the gimbal surrounds the 2D camera in every direction except the one the lens faces. In 360, the camera sees everything, so a normal rig cannot be mounted around it. Instead, counterweights need to be built into the design to raise the vertical centre of gravity to a point where the camera can sit on top of the gimbal. Economically speaking, the market for 360 cameras is small compared with 2D cameras, and the subset of users who need to move 360 cameras often and professionally is smaller still. Hence only very small start-ups are attempting to make 360 gimbals at this time. Until now, many people have been using gyro stabilizers instead. These are based on rotors in a small housing spinning at up to 10,000rpm, creating angular momentum that counteracts movement at the other end of the pole where the camera is mounted. Kenyon Gyros... read more
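To make the counterweight point concrete, here is a minimal sketch of the static balance involved, assuming a simple two-mass model with made-up example values (our own illustration, not any manufacturer's design maths): the camera's moment above the gimbal pivot must equal the counterweight's moment below it.

```python
# Minimal sketch of the static balance behind a top-mounted 360 gimbal.
# Illustrative only: a simple two-mass model with made-up example values.

def counterweight_mass(camera_mass_kg: float,
                       camera_offset_m: float,
                       counterweight_offset_m: float) -> float:
    """Mass needed below the pivot so the centre of gravity sits at the pivot.

    Balance condition: camera_mass * camera_offset
                       = counterweight_mass * counterweight_offset
    """
    return camera_mass_kg * camera_offset_m / counterweight_offset_m

# Example: a 1.5 kg 360 camera sitting 0.10 m above the pivot,
# balanced by a counterweight hanging 0.25 m below it.
print(f"{counterweight_mass(1.5, 0.10, 0.25):.2f} kg")  # -> 0.60 kg
```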

NAB 2017 Round-up: Other 360 Video Kit

NAB wasn’t just about new 3D stereo cameras. There was a range of other cameras and production kit worth noting ahead of their release in the near future. Facebook Surround360 The most impressive announcement was Facebook’s latest version of its Surround360 program. Pictured above, Facebook has teamed with a company called Otoy to develop a volumetric-based capture system (a pseudo lightfield technology). They record images from multiple angles and then build that depth information into a virtual scene, allowing a viewer to literally move around (small head movements) in what is called six degrees of freedom (6DOF). The prototype cameras come in two models, the X6 and X24 (six and 24 sensors respectively). The bigger camera captures more data, with four overlaps of information against the X6’s three. This means more scale, higher quality images and data, and greater movement. The key issue for volumetric recording is dealing with the amount of data, but we anticipate seeing Facebook collaborate with an established camera manufacturer on this development, with a view to a potentially releasable product at NAB 2018. Z CAM S1 Pro Although announced a few months before NAB, this was the first time we saw non-prototype cameras nearing release. Full micro 4/3 sensors mean higher dynamic range, more accurate colour science and better low light sensitivity. Based on Sony 2880 sensors, this is likely to be the standard pro mono rig for a while, although heavily hindered by a maximum of 30fps and a larger parallax than its smaller sibling, the S1. (Note the lenses in this picture are not the ones on the final version). Panasonic 360 4K... read more

NAB 2017 Round-up: New 3D 360 Cameras

NAB show 2017 is over, and with it came a raft of new 360 cameras as expected. This year’s focus was primarily on advancing stereo 3D 360 for virtual reality (VR). Here is a round-up of the latest 3D 360 cameras and our impressions of each. Until now, achieving stereo 3D 360 has been extremely challenging and costly, and has required a specific skillset few possess. Felix & Paul are probably the only company achieving consistently good results, with extensive budgets and post schedules to match. The key technology changing this is the advancement of optical flow stitching. Combine it with higher resolution cylindrical designs with closer parallax, and we are going to see many more examples of high quality stereo in the coming months. Professional stereo 3D 360 cameras for Virtual Reality Professional 360 3D stereo cameras are coming soon. The Z CAM V1 Pro, Yi Halo and KanDao Obsidian all look like potentially great stereo 360 capable camera systems. Z CAM V1 PRO The Z CAM V1 Pro is the premier 3D capable camera we have seen: high quality Sony Micro 4/3 sensors arranged in the round, capable of 60fps 4K stereo. Simply put, this is the next standard pro 360 camera, with a high cost to match. Its optical flow stitching software is a custom, camera-specific development based on Facebook’s Surround360 algorithm. Yi Halo The Yi Halo is a collaboration between Yi (Xiaomi) and Google. After falling out with GoPro over the Odyssey, Google partnered with Yi to enhance the systems that take advantage of its Jump optical flow algorithm. It is more advanced than... read more

Optical Flow Stitching – The Future Of 360 Video Post

Optical flow stitching is on the brink of becoming commonplace among 360 video professionals. It is the process of analysing pixel movement across a number of frames and comparing that movement between different images. Traditional image stitching compares pixels in a single frame for relative matching accuracy and then applies this to the whole video. Essentially one frame dictates the stitch for everything (unless the edit is broken into sections of stitch with a warp transition between them). This is a granular methodology, matching pixels like a jigsaw across a line. Optical flow is the process of analysing the objects within the flow of a series of frames and noting how they match and move across a seam. A human can instantly tell if a vertical line such as a building or door frame is broken, but a standard stitch will stitch for a single parallax distance, often infinity, so won’t take incorrect geometry into account. Optical flow software will match the initial frame of the stitch, then analyse the following frames for movement or a better match, and apply a skewed average of those results back to the first frame. It is a complicated algorithm driven by extensive mathematics. Facebook open-sourced its optical flow algorithm in 2016, and we are now seeing tweaked versions of it applied by various camera manufacturers to their products. This algorithm flows the top and bottom into the side cameras, and then flows pairs of cameras for objects at different distances. Optical flow is a processor intensive process. On WonderStitch, new software based on the Facebook algorithm, an optimised Windows machine... read more
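As a rough illustration of the underlying idea, the sketch below uses OpenCV's Farneback dense optical flow (our own minimal example, not the Facebook or WonderStitch implementation) to measure per-pixel motion between two consecutive frames of the kind a flow-based stitcher compares across a seam. The file names are placeholders.

```python
# Minimal sketch: dense optical flow between two consecutive frames,
# the kind of per-pixel motion data a flow-based stitcher works from.
# Not the Facebook Surround360 / WonderStitch implementation.
import cv2
import numpy as np

# Placeholder frames from one camera's footage near a stitch seam.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense flow: returns an (H, W, 2) array of per-pixel
# (dx, dy) motion vectors from `prev` to `curr`.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5,   # image pyramid scale between levels
    levels=3,        # number of pyramid levels
    winsize=15,      # averaging window size
    iterations=3,    # iterations per pyramid level
    poly_n=5,        # pixel neighbourhood for polynomial expansion
    poly_sigma=1.2,  # Gaussian sigma for the expansion
    flags=0,
)

# Per-pixel motion magnitude; a stitcher would use these vectors to
# warp the overlapping images so objects line up across the seam.
magnitude = np.linalg.norm(flow, axis=2)
print("mean pixel motion:", magnitude.mean())
```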

S1 Pro Camera: First Impressions

The S1 Pro is a new higher grade version of the S1 camera. Similar to the Z4XL, the S1 Pro features 4x micro four thirds sensors. On this particular model, they are Sony sensors, which have a larger pixel size than the Panasonic sensors. The camera also features a smaller parallax and can be combined with optical flow software to enable high quality, optically near-perfect image stitching. Optical flow algorithms measure pixel movement across several frames for every frame rendered, so while quite accurate, they add significant render time to the post workflow. In this Mountain Dew advert, we see excellent movement with an RC VR vehicle and also attachment to a helmet during the skydive segment. The transition between the two is seamless. We can see a good minimum distance in the bar, as the production crew took advantage of the smaller parallax. The dynamic range of the cameras is also quite extensive. A useful side-note is that the lenses employed on the S1 Pro feature aperture control, making them, in our opinion, the first truly professional lenses for VR 360 video filmmaking.... read more

Spatial Audio Workflow for 360 Video

Spatial audio is a fantastic way to increase immersion for 360 video. By enveloping the viewer with binaurally rendered sounds that change depending on their head movements, the creative opportunities for immersive storytelling expand still further. But creating a spatial mix can be technically challenging at times, especially with little detailed information and no one-size-fits-all workflow solutions going around. Luckily there are some tools available that make life a bit easier. One of these is the Spatial Workstation by Facebook (previously developed by Two Big Ears). This set of plug-ins works in most major DAWs such as Pro Tools HD, Nuendo and Reaper, and allows you to position and rotate sounds around the listener’s head, while the encoder allows you to mux your final audio mix with your 360 video. With the Spatial Workstation’s latest release, PC users can now also take advantage of these plug-ins and export first-order ambisonics for most major VR platforms, including Facebook, YouTube and Samsung Gear VR. One of the most challenging aspects of working with spatial audio is making your mix work consistently across multiple platforms, as each platform not only has its own specific delivery specifications, but also decodes ambisonic audio slightly differently, often resulting in inconsistent-sounding mixes. The engineer needs to take additional steps to test and fine-tune the mix for each platform, which can be time consuming. The results, however, are worth it and certainly add an extra dimension to any 360 production by giving the viewer the added sensation of not just being able to ‘look’ around... read more
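To illustrate the final muxing and delivery step, here is a minimal sketch that pairs a four-channel first-order ambisonic (AmbiX) mix with a 360 video using ffmpeg, then injects the spatial-audio metadata YouTube expects using Google's open-source Spatial Media Metadata Injector. The file names are placeholders, this is not the Spatial Workstation encoder itself, and each platform's current delivery spec should be checked before relying on these settings.

```python
# Minimal sketch: mux a 4-channel AmbiX mix with a 360 video, then
# inject spatial metadata. File names are placeholders; check each
# platform's current delivery spec before relying on these settings.
import subprocess

# 1) Mux the ambisonic WAV with the stitched video (video stream copied).
subprocess.run([
    "ffmpeg",
    "-i", "stitched_360.mp4",   # equirectangular master
    "-i", "mix_ambix.wav",      # 4-channel first-order ambisonics (AmbiX)
    "-map", "0:v", "-map", "1:a",
    "-c:v", "copy",             # keep the video untouched
    "-c:a", "pcm_s16le",        # lossless audio for the upload master
    "muxed_360.mov",
], check=True)

# 2) Inject 360-video and spatial-audio metadata with Google's
#    Spatial Media Metadata Injector (github.com/google/spatial-media).
subprocess.run([
    "python", "spatialmedia",
    "--inject", "--spatial-audio",
    "muxed_360.mov", "final_360.mov",
], check=True)
```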

360 Video Output Render Resolutions – Early 2017

Every production is quite different in scope, budget and schedule. However, we do know that we are going to end up with a video of fairly consistent resolution and bitrate. In this post we discuss the best outputs for each platform as things stand. We refine our workflow job by job, seeking the best quality-versus-speed balance to deliver a suitable image for each job. There are three elements to this: resolution, bitrate and codec. File size is also important depending on logistics. Resolution Regarding resolution, we know that 360 videos need to be 4K minimum. As 360 players wrap an equirectangular video onto a sphere, the viewer only ever sees a small section of that video. At 4K UHD, we end up viewing a window of about 1.3K resolution. A master resolution of 1920×1080 HD results in a window of only about 370 pixels, roughly two thirds of SD: far too soft to be comfortable or watchable in our opinion. 8K resolution videos would be ideal, rendering down to around a 2K window, which is equivalent to the HD images general audiences have become used to. However, 8K is not yet viable, either as a workflow on any reasonably priced computer system or, more importantly, for playback on any mass-market VR device. When mobile phones can play 8K video smoothly, the format will be worth attaining; right now 8K is only available on YouTube and very few people have the capability to stream and watch such a high resolution. 4K Output At the moment, we often work to one of the following... read more
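A minimal sketch of the arithmetic behind these window figures (our own illustration; the 120° horizontal field of view is an assumed round number, and real headsets, like the figures quoted above, vary):

```python
# Minimal sketch: how much of an equirectangular master sits inside the
# viewer's window at any moment. The 120-degree horizontal field of view
# is an assumed round number; real headsets (and the figures quoted in
# the post) vary.

def visible_width(master_width_px: int, fov_degrees: float = 120.0) -> int:
    """Horizontal pixels of a 360 master visible inside the headset window."""
    return round(master_width_px * fov_degrees / 360.0)

for label, width in [("HD 1920", 1920), ("4K UHD 3840", 3840), ("8K 7680", 7680)]:
    print(f"{label} master -> ~{visible_width(width)} px visible")
# 4K UHD gives ~1280 px, i.e. the "about 1.3K" window mentioned above.
```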

360 Live Stream Virtual Reality Broadcast Solution

Live streaming in 360 is becoming more and more popular. The concept has a big future for delivering live entertainment to remote customers in VR. Right now there are issues with delivering 4K, let alone the 8K images required to make it truly watchable. Generally 1080 HD is the limit for widespread streaming, which is a bit soft for comfortable viewing, but it still has its applications for small audiences who can handle higher data rates. We have been waiting for some time for a package of 360 hardware and software that is rock-solid reliable for broadcasting and leaves no room for technical error. With a combination of two pieces of equipment, we finally have a solution which fits those needs. S1 Camera & Teradek Sphere The S1 camera is a reliable, high quality 360 camera for the mid-range professional market. It records to a range of useful resolutions and frame rates, but also outputs four 1080 HD videos via mini-HDMI ports on its base. From the S1 HDMIs, the videos feed into a 4-port Teradek Sphere unit. The Sphere is primarily used for wireless on-set video monitoring, however it can also send 360 images directly to an RTMP server or YouTube. The Sphere wirelessly sends the four encoded videos to an iPad Pro. The iPad Pro, running Teradek software, is capable of a live on-the-fly stitch which can be customised to fit the build and lenses of any vertically orientated 360 camera. The iPad feed can actually put out a 4K live stitch, but a stream would usually take 1080 HD at a reasonable bitrate. Technically... read more
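For context on the final hop of such a pipeline, here is a minimal sketch of pushing an already-stitched equirectangular feed to an RTMP endpoint with ffmpeg. This is our own generic example, not the Teradek pipeline; the source file, URL and stream key are placeholders.

```python
# Minimal sketch: push an already-stitched equirectangular feed to an
# RTMP endpoint. Generic ffmpeg example, not the Teradek pipeline;
# source, URL and stream key are placeholders.
import subprocess

STREAM_URL = "rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY"  # placeholder

subprocess.run([
    "ffmpeg",
    "-re",                       # read input at native frame rate (live pacing)
    "-i", "stitched_feed.mp4",   # placeholder for the live stitched source
    "-c:v", "libx264",
    "-preset", "veryfast",       # favour encode speed over compression
    "-b:v", "8M",                # 1080 HD 360 video needs a generous bitrate
    "-maxrate", "8M", "-bufsize", "16M",
    "-g", "60",                  # keyframe interval (2 s at 30 fps)
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv",                 # RTMP carries an FLV container
    STREAM_URL,
], check=True)
```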

Z S1 360 Camera Review

We have had the S1 in action on productions for some time now, and felt it was time to report on its suitability for 360 video production. The S1 is a mid-professional level 360 camera system which integrates 4x 4K sensors and 4x 220° lenses into one small, slim unibody camera. It records in a variety of formats, the most common being 4K 30fps and 2.7K 60fps, which result in 6K and 4K maximum masters respectively after stitching. Image quality is pretty good and slightly better than the GoPro Omni system, with slightly sharper images due to better lenses. The dynamic range is similar, as is the noise sensitivity. The cameras are aligned in a much better geometry than the GoPro Omni’s diamond configuration, so vertical stitch lines are easier to manage when objects cross them. Importantly, the camera sensors are pixel-synced, and the camera usefully records to full-size SD cards: no more fiddling around and losing MicroSD cards. The overlap is generous enough to give flexibility when the stitch line needs moving, and while the body gets very hot (too hot to touch at times), the camera itself never overheats and shuts down. The small parallax distance of the cameras reduces the usable minimum distance of objects to around 0.75m (in many cases 0.5m). This means it can film in very small spaces such as inside cars, or have objects cross close to camera reasonably effectively. The four-sensor configuration does have one major downside: the nadir and particularly the zenith coverage is poor. Ceilings often need additional post GFX to clean up... read more

Acting For 360 VR

While our posts often focus on the technical side of VR filmmaking, or the crew roles, we have not yet looked at one of the most important elements of 360 video: the acting. In this post we consider the role of the actor and how they fit into our new medium. An actor for 360 video needs to consider what the user is going to see. Basically, unless they are hidden or off-set, they will be in shot, which means they need to think about everything they do at every moment. They will be ‘always on’, and this has repercussions for their emotional performance. The actor needs to think about more than just their close-up and the emotion their face is expressing, as is usual in mono 2D filming. They should also be thinking about their physical performance: whether moving or standing still, their limbs need to make natural, consistent movements that both suit the character and feel natural for a human being. The movements are closer to stage/theatre acting, with the actor aware of the audience’s eyes that are always on them. The problem, then, is that we have a mix of both worlds. An actor needs to deliver lengthy physical performances with the emotional intensity of a film close-up throughout the course of a whole take. And this needs to come from every actor involved in the scene, for every take. A director needs to work with an actor throughout the course of a whole take, ensuring natural movements which aren’t disjointed with... read more