General SDK Information

ARTVHub

This is a place where the most useful options are exposed to the user. You can configure all the basics and subscribe to a number of events that can greatly expand the creative potential of the ARTV SDK, all in one place. You can access this Hub by selecting the ARTV Engine object in the scene hierarchy.

Camera

AR Camera

Use this to set the camera that will be used for rendering AR content. If this field is left empty, ARTV will use Camera.main by default.

Content instantiation

Video recognition start type

There are two options for when ARTV should start the video recognition routine in relation to the camera calibration process:

  • After Initialization Video recognition starts as soon as the system initializes. Content may appear shaky until the background camera calibration process is finished.

  • After Calibration Finished Video recognition will not start until camera calibration is complete.

Instantiate content If checked, the system will spawn user content as defined in the ContentPrefabsMap mapping when the particular video is recognized. Note that you can also have content present in the scene at build time, in which case there may be no need to instantiate anything, as it is already there.

Load Content Async If checked, content will be spawned asynchronously.

Content Parent This is the root ‘folder’ for your content. Use this game object to hold all of your synchronizable AR content in the scene. This is also where content will be spawned if the Instantiate Content option is checked.

Content Prefabs Map This file maps user content to a particular video, so you can have separate experiences for different videos.

Prefab Path In Resources You should keep your finalized experience content as a prefab somewhere within the Resources folder. This option tells ARTV where to look for the particular content to instantiate in the scene.

Keys Source Here you reference a DescriptorsList file, which is populated from the available descriptors via the LocalRecognitionFilesInfo file. This is covered in more detail in the “LocalRecognitionFilesInfo file and adding new Descriptors” chapter below.

Map Here you bind available descriptor names to specific prefab names. This mapping is used to instantiate user content according to the particular video being recognized.

Events

VideoRecognized(string) This event is fired when a video is recognized. The video name is provided as a string in the event’s payload.

Scan Floor Started/Finished Invoked when floor recognition starts and ends, respectively. Can be used, for example, to present end users with a UI showing the floor recognition progress.

Background Calibration Started/Finished Invoked when the calibration process starts and ends.
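
You can also subscribe to these events from a script. The sketch below assumes the events are exposed as UnityEvents on the ARTVHub component; the VideoRecognized field name mirrors the event listed above but is not guaranteed to match the actual API.

```csharp
using UnityEngine;

// A sketch of subscribing to ARTVHub events from code. Assumes the
// events are exposed as UnityEvents on the ARTVHub component; the
// VideoRecognized field name is illustrative.
public class RecognitionLogger : MonoBehaviour
{
    [SerializeField] private ARTVHub hub; // assign the ARTV Engine object's hub in the Inspector

    private void OnEnable()
    {
        hub.VideoRecognized.AddListener(OnVideoRecognized);
    }

    private void OnDisable()
    {
        hub.VideoRecognized.RemoveListener(OnVideoRecognized);
    }

    private void OnVideoRecognized(string videoName)
    {
        Debug.Log($"Recognized video: {videoName}");
    }
}
```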

LocalRecognitionFilesInfo file and adding new Descriptors

You should place your freshly obtained descriptor files in the StreamingAssets/Descriptors folder.

After that, locate and select the LocalRecognitionFilesInfo file in the Resources folder.

In the Inspector, make sure that the Descriptors Names Output field references the DescriptorsList file and that it is the same file referenced in the ContentPrefabsMap file’s Keys Source field (it is preset by default, so you should not need to change it out of the box).

With the new descriptors in place, hit the Update Paths button in the LocalRecognitionFilesInfo Inspector to make them available for mapping in ContentPrefabsMap.

Now you can assign prefab names to the appropriate descriptor names in ContentPrefabsMap. As long as this file is referenced in the ARTVHub’s respective field, ARTV will use its mapping to instantiate user content based on the particular video being recognized.

Content size and position

By default, user content is scaled and positioned relative to the TV screen as it appears in the image from the device camera. You can add a TVScreenReference object to the scene as guidance for arranging your experience in space.

  1. Go to the Project tab in the Editor and search for ‘TVScreenReference’.
  2. Drag and drop the TVScreenReference prefab onto the Hierarchy tab (or directly into the Scene view, but make sure to set its root transform position to (0, 0, 0)).
  3. A green rectangle representing a TV screen appears in the Scene view.

Now you can place and scale your assets appropriately, using TVScreenReference as a reference object. This mock TV screen sits at the (0, 0, 0) origin and is 1.92 x 1.08 units in size.

Note When running on a device, once the screen corners are recognized, the Unity local coordinate system origin (i.e. the (0, 0, 0) point) maps to the center of the recognized TV screen. In other words, when working in the Unity Editor, treat the origin as the center of the TV in the actual experience.
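
For example, to pin an object to the top-right corner of the screen, you can compute its position from the mock screen’s dimensions. A minimal sketch, relying only on the facts above (screen centered at the origin, 1.92 x 1.08 units):

```csharp
using UnityEngine;

// Pins an object to the top-right corner of the TV screen, based on
// the mock screen being centered at (0, 0, 0) and measuring
// 1.92 x 1.08 units.
public class PinToScreenCorner : MonoBehaviour
{
    private const float ScreenWidth = 1.92f;
    private const float ScreenHeight = 1.08f;

    private void Start()
    {
        // Half-extents from the screen center give the corner position.
        // Assumes this object lives under the Content Parent, whose
        // origin coincides with the screen center.
        transform.localPosition = new Vector3(ScreenWidth / 2f, ScreenHeight / 2f, 0f);
    }
}
```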

When running on a device, user content is matched to the apparent TV screen size by adjusting the AR camera FOV.

It is possible to set content scale and position in absolute real-world units when using ARCore/ARKit. In this case, objects will appear at the real-world size specified in meters.

Real Space Placement

If you want to express content scale and position in real-world units, you will need to attach the Real Space Placement component to that object.

  1. Select the object.
  2. In its Inspector click ‘Add Component’.
  3. Search for the ‘Real Space Placement’ script and select it.

Once it’s added, you will notice a couple of available options for this component:

  • Surface Anchor Type

    • None Objects will stay firm in their position.

    • Stick to Lower Furthest plane Objects will move to the recognized surface below their initial position. If the system recognizes another surface below the original one, objects will descend further down to the lower one.

    • Stick to Lower Nearest plane Objects will stay on the nearest recognized surface below their initial position. If a new surface appears higher than the original one, objects will jump up to the higher plane.

  • Rotate To Match Surface Orientation

    • If selected, objects will align vertically with the recognized surface.
    • Otherwise, the TV screen orientation will be used as a basis.

Note: When Real Space Placement is attached to a game object, its scale and position are expressed in *meters*: 1 unit = 1 meter.

A cube with scale (0.05, 0.05, 0.05) will have a size of 5 x 5 x 5 cm.
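
As a quick sketch, the same cube could be spawned and configured from code. The RealSpacePlacement class name below is inferred from the ‘Real Space Placement’ script name and may differ in the actual SDK:

```csharp
using UnityEngine;

// Spawns the 5 x 5 x 5 cm cube from the example above at runtime.
public class SpawnRealSizeCube : MonoBehaviour
{
    private void Start()
    {
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.localScale = new Vector3(0.05f, 0.05f, 0.05f); // 1 unit = 1 meter
        cube.AddComponent<RealSpacePlacement>(); // hypothetical class name
    }
}
```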

Synchronizable content

Out of the box, the ARTV SDK supports five types of content that can be synchronized to the video, with more types to be added in future SDK updates.

  • Animator
  • AudioSource
  • ParticleSystem
  • VideoPlayer
  • PlayableDirector (Timeline)

This means that not only will objects containing these components become active when the video reaches a certain point, but their content (animations, audio, etc.) will also be synchronized with the video timeline. For example, if the video is recognized in the middle of some content’s interval, the animations, sound, or particle effects will catch up with the video.

Synchronization applies to GameObjects that have the TimelineContentSynchronization component attached and to all of their children hierarchically.

Furthermore, any MonoBehaviour component can become synchronizable if it implements ISynchronizableContent interface.

If you want to keep certain synchronizable content from being synced with the video, attach the IgnoreSynchronization component to that object.

By default, IgnoreSynchronization affects only the particular GameObject it is attached to and has no influence on its child objects. If you need to stop synchronization on a particular GameObject and all of its children, set ApplyToAllChildrenInHierarchy to true.

Note Unity Timeline rewinds content in a similar fashion to ARTV, so components controlled by Unity Timeline should have the IgnoreSynchronization component attached.
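
For instance, a Timeline-driven subtree can be excluded from code. This sketch assumes ApplyToAllChildrenInHierarchy is a public field on IgnoreSynchronization; its actual visibility may differ:

```csharp
using UnityEngine;

// Excludes this GameObject and its whole subtree from ARTV
// synchronization, e.g. because it is driven by Unity Timeline.
// Assumes ApplyToAllChildrenInHierarchy is a public field.
public class ExcludeFromSynchronization : MonoBehaviour
{
    private void Awake()
    {
        var ignore = gameObject.AddComponent<IgnoreSynchronization>();
        ignore.ApplyToAllChildrenInHierarchy = true; // also stop syncing all children
    }
}
```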

ISynchronizableContent interface

Implementing this interface in a MonoBehaviour will enable it for synchronization within the ARTV system.

ISynchronizableContent has the following methods:

  • Synchronize(float time)
  • Reset()
  • Play()
  • Pause()

Most of these are self-explanatory; however, Synchronize(time) may need additional explanation regarding its time argument. The system synchronizes content using the interval’s local time. This means that whenever the system reaches an interval defined in TimelineContentSynchronization, it will start synchronizing that interval’s content from time = 0.
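
As an illustration, here is a minimal sketch of a custom synchronizable component; it assumes ISynchronizableContent declares exactly the four methods listed above:

```csharp
using UnityEngine;

// A minimal sketch of a custom synchronizable component: an object
// that rotates in step with the video. Assumes ISynchronizableContent
// declares exactly the four methods listed above.
public class SynchronizedSpotlight : MonoBehaviour, ISynchronizableContent
{
    [SerializeField] private float degreesPerSecond = 90f;

    private bool isPlaying;
    private float localTime; // interval-local time, starts at 0

    public void Synchronize(float time)
    {
        // Jump to the interval's local time so the rotation catches up
        // with the video, e.g. when recognition happens mid-interval.
        localTime = time;
        ApplyRotation();
    }

    public void Reset()
    {
        localTime = 0f;
        isPlaying = false;
        ApplyRotation();
    }

    public void Play() => isPlaying = true;

    public void Pause() => isPlaying = false;

    private void Update()
    {
        if (!isPlaying) return;
        localTime += Time.deltaTime;
        ApplyRotation();
    }

    private void ApplyRotation()
    {
        transform.localRotation = Quaternion.Euler(0f, localTime * degreesPerSecond, 0f);
    }
}
```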