Sneak Preview – We Give You More Fantastic VSA Configurations

Well yes, it’s time for some pre-middle-ages holidays in Sweden. While we sing silly songs and have a blast, you get to read our final sneak preview of Vidispine 4.6 before release (and have a blast too). This time you get more goodies around VSA in a clustered Vidispine configuration.

With vidispine-server it is easier than ever to set up a high-availability Vidispine configuration. Install Solr with ZooKeeper, use a stand-alone ActiveMQ and a redundant PostgreSQL/MySQL solution, and you are ready to launch any number of vidispine-server instances. However, the Vidispine Server Agent (VSA) could previously only connect to one Vidispine instance. This caused things to break, as any Vidispine instance might have to connect to the VSA to execute jobs. Not so any longer with VS 4.6.

Let’s start with some diagrams. The normal setup for VSA in 4.4 and 4.5 uses the operating system’s SSH service, see Figure 1.

[Figure 1: The normal VSA setup, using the operating system’s SSH service]

As a side note, version 4.6 also introduces the possibility of not using SSH at all, which is useful if both VS and VSA are in a VPC/VPN, see Figure 2. You can read more about it in a previous VSA sneak preview.

[Figure 2: VSA without an SSH tunnel]

The SSH configuration in 4.4/4.5 depends on the operating system, and it requires a special user to be created. In VS 4.6, this is no longer necessary. Instead, vidispine-server bundles its own SSH server, which listens on its own port and does not require any Linux user account. In addition, the bundled SSH server is really locked down; it does not even have the shell subsystem enabled. See Figure 3. The new model is also great if you are using Docker or other container services, where enabling the operating system’s SSH server might be more than a one-liner, or where you do not want to enable SSH at all. Just map any port to port 8183 in the container, and enable the VSA node.
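With Docker, that port mapping could look something like the following. This is just a sketch; the image name, the host port 2222, and the 8080 API mapping are placeholder values for illustration, not an official image or required ports:

```shell
# Hypothetical image name -- substitute your own vidispine-server image.
# Publish host port 2222 to the bundled SSH server on container port 8183,
# and host port 8080 to the API port:
docker run -d --name vidispine-server \
  -p 2222:8183 \
  -p 8080:8080 \
  my-vidispine-server:4.6
```

The VSA then connects to port 2222 on the Docker host, which reaches the bundled SSH server inside the container.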

[Figure 3: VSA with multiple Vidispine servers]

In order to enable the VSA node, you need to do three things:

  1. Enable the VSA port in vidispine-server. This is done by adding these two lines to your server.yaml file, and restarting vidispine-server.
  2. Add the VSA node to vidispine-server. This is done using the vidispine-admin tool.
  3. Add the output from the tool to the VSA’s /etc/vidispine/agent.conf, and restart VSA.
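As a sketch, step 1 could look like the fragment below in server.yaml. Note that the key names here are placeholders for illustration; check the 4.6 documentation for the actual two lines to add:

```yaml
# Hypothetical key names, for illustration only.
# Enables the bundled SSH server that VSA nodes connect to:
vsa:
  port: 8183
```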

Let me walk you through the input and output of the steps above. First some standard stuff.

I assume you are running vidispine-admin on the same machine as vidispine-server, so just hit enter here:

Now, which address should VSA use to connect to vidispine-server?

If you are using Docker, firewall port forwarding, or anything else that means that VSA should not connect to the same port number as specified in your server.yaml, provide the external port number here. Otherwise hit enter.

I really suggest that you give the VSA node a name. You will find more new reasons for this below.

We let the system assign the UUID:

Now we could be done. But we are running a clustered vidispine-server, and VSA needs to be able to connect to both instances. So we add the other one as well:

That’s it. What will happen now is that vidispine-server will generate a key pair and return the private key in the text output. The public key is stored in Vidispine’s database.

A final goodie. VSA URIs (starting with vxa://) are not very human-readable, as they contain the UUID of the VSA node. With 4.6, you can use the name of the node as well, like /API/vxa/vsanode_kigali/. When URIs are returned from Vidispine, you can use methodMetadata to specify that you want the returned URIs to contain VSA names instead of UUIDs, e.g., /API/item?content=shape&methodMetadata=vsauri=NAME.
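As a sketch, such a request could look like this with curl. The host name, the admin:admin credentials, and the node name vsanode_kigali are example values for your own deployment:

```shell
# Example host and credentials -- replace with your own deployment's values.
curl -u admin:admin \
  "http://vidispine.example.com:8080/API/item?content=shape&methodMetadata=vsauri=NAME"
# File URIs in the response then read vxa://vsanode_kigali/...
# instead of vxa://<uuid>/... (provided the node name is unique).
```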

Note! Specifying VSAs by name, and returning VSA URIs with names, only works if the name is unique. If two or more VSAs have the same name, you will get a 404 back.