Image Mode
Before implementing Image mode in your web application, make sure you have a comprehensive understanding of how state is handled in OmniStream.
Simple Example
A basic Image mode integration is almost exactly the same as a basic Video mode integration; the only change needed is to set streamingMode to image rather than video. We also need to set initialAppStateData, but if you're implementing Image mode, this should already be set from your Video mode integration.
Replace "customer" and "renderService" with the details from your account.
<!-- We need libZL -->
<script src="https://libzl.zlthunder.net/libzl/versions/latest/libzl.js"></script>
<!-- We need a container to attach the stream to -->
<div id="streamContainer"></div>
<!-- We need a bit of javascript to sort the connection out -->
<script>
const libzl = new LibZL();
const cloudstreamSettings = {
directConnection: false,
cloudConnectionParameters: {
customer: "omnistream",
renderService: "yourservicename",
},
streamingMode: "image",
parent: "streamContainer",
initialAppStateData: {
/*Put your initial state in here*/
},
};
libzl.cloudstream("cloudstreamExample").then(function (api) {
//Adding to the global namespace
cloudstream = window.cloudstream = api;
//Adding event listeners to what we're interested in
cloudstream.addEventListener("error", function (error) {
console.log("OmniStream had an error: ", error);
});
cloudstream.addEventListener("streamReady", function () {
console.log("The stream is ready!");
});
//Connecting to the stream
// - options contains parent DOM element name to attach to
cloudstream.connect(cloudstreamSettings);
});
</script>
It is also possible to switch back and forth between Video and Image mode after a connection has been established. This can be done by calling setStreamingMode with either image or video; for details, see the API reference.
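As a minimal sketch, a small helper can keep track of the current mode and flip between the two values; the only API call assumed here is setStreamingMode, as described above:

```javascript
// Return the opposite streaming mode: "image" <-> "video".
function otherStreamingMode(currentMode) {
  return currentMode === "image" ? "video" : "image";
}

// Usage (assuming the cloudstream API object from the example above):
//   currentMode = otherStreamingMode(currentMode);
//   cloudstream.setStreamingMode(currentMode);
```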
If your interactions are already driven by the webpage and you're already handling state properly in both your application and web code, then most of this should carry over to Image mode.
The main point of difference between Video and Image modes is how cameras are handled. In Video mode, it's usual to give the user fully interactive cameras that they control through direct interaction with the application stream viewport. In Image mode the application viewport is an image, so the same level of interactivity isn't possible, but both static and interactive cameras can still be supported.
Static Cameras
Static cameras are easy to represent in Image mode. The application should be set up to handle cameras through state, and then from your webpage you can make a request like cloudstream.sendJsonData({ "camera": "aNiceCamera" }), which the application will receive, interpret as JSON, and use to perform a camera switch.
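For instance, a set of preset buttons could each send a different camera name. The control ids and camera names below are illustrative; the only real API call is sendJsonData, shown above:

```javascript
// Map of UI control ids to camera names the application understands.
// Both the ids and the camera names here are illustrative.
const cameraPresets = {
  frontButton: "aNiceCamera",
  topButton: "topDownCamera",
};

// Build the JSON payload the rendering application will receive.
function cameraRequest(cameraName) {
  return { camera: cameraName };
}
```

Each control's click handler would then call cloudstream.sendJsonData(cameraRequest(cameraPresets[controlId])).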
Animated Cameras
Animated cameras take a little more thought, but can still be represented without too much complexity. The guide to setting up cameras in your application covers the different approaches to setting up the camera(s); here we'll assume that eight cameras have been set up in an orbit sequence around an object. The principle is that while the mouse button is held down, moving the mouse a certain distance left or right will make a request to switch to the next/previous camera in the orbit sequence.
<script>
const xMoveDelta = 100; //When the mouse moves 100 pixels in the x-axis we'll switch to the next/previous camera
const orbitCam = ["1", "2", "3", "4", "5", "6", "7", "8"]; //These are the camera "names" that will be sent to the rendering application to switch to
let orbitCamIndex = 0;
let lastMouseX = 0;
let mousePressed = false;
//If the mouse has moved far enough left or right while left mouse is down, move to the next camera in the list - will give us a basic animated orbit camera in Image mode
const mouseMove = (mouseEvent) => {
if (mousePressed) {
if (mouseEvent.clientX - lastMouseX >= xMoveDelta) {
//Mouse moved right
orbitCamIndex = (orbitCamIndex + 1) % orbitCam.length; //Wrap back to the beginning so we'll loop
cloudstream.sendJsonData({ camera: orbitCam[orbitCamIndex] });
lastMouseX = mouseEvent.clientX;
} else if (mouseEvent.clientX - lastMouseX <= -xMoveDelta) {
//Mouse moved left
orbitCamIndex =
orbitCamIndex - 1 < 0 ? orbitCam.length - 1 : orbitCamIndex - 1; //Wrap round to the end so we'll loop
cloudstream.sendJsonData({ camera: orbitCam[orbitCamIndex] });
lastMouseX = mouseEvent.clientX;
}
}
};
const mouseDown = (mouseEvent) => {
mousePressed = true;
lastMouseX = mouseEvent.clientX;
};
const mouseUp = (mouseEvent) => {
mousePressed = false;
};
let streamContainerElement = document.querySelector("#streamContainer");
streamContainerElement.addEventListener("mousedown", mouseDown, false);
streamContainerElement.addEventListener("mouseup", mouseUp, false);
streamContainerElement.addEventListener("mousemove", mouseMove, false);
</script>
For simplicity, the above example shows camera as the only state variable. In a real example, when sending a command to change the camera in an orbit, the rest of the state would also need to be included.
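One way to do that, sketched below, is to keep the authoritative state object on the page and merge the camera change into it before sending. The state fields here are illustrative; use whatever your application actually defines:

```javascript
// The page keeps the authoritative copy of the application state.
// The fields here are illustrative - use the ones your application defines.
let appState = {
  camera: "1",
  colourScheme: "dark",
  modelVariant: "standard",
};

// Merge a camera change into the full state, returning a new payload to send.
function stateWithCamera(state, cameraName) {
  return Object.assign({}, state, { camera: cameraName });
}

// Usage (inside the mouseMove handler above):
//   appState = stateWithCamera(appState, orbitCam[orbitCamIndex]);
//   cloudstream.sendJsonData(appState);
```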
While this is a simple example, it's easy to imagine how it could be adapted by adding more cameras, switching from a raw pixel distance to a percentage of screen size, supporting non-looping animated cameras, and so on.
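For instance, the wrap-around index arithmetic from the orbit example can be generalised into a single helper that also supports the non-looping case by clamping at the ends of the sequence; this is a sketch, not part of the OmniStream API:

```javascript
// Advance an orbit camera index by one step in either direction.
// With loop=true the index wraps around (as in the orbit example above);
// with loop=false it clamps at the ends, giving a non-looping camera sweep.
function nextOrbitIndex(index, step, length, loop) {
  const next = index + step;
  if (loop) {
    // Double-modulo keeps the result non-negative for negative steps.
    return ((next % length) + length) % length;
  }
  return Math.min(length - 1, Math.max(0, next));
}

// Usage (replacing the inline index arithmetic in the mouseMove handler):
//   orbitCamIndex = nextOrbitIndex(orbitCamIndex, 1, orbitCam.length, true);
```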