Introduction

8th Wall enables developers to create, collaborate and publish Web AR experiences that run directly in a mobile web browser.

Built entirely using standards-compliant JavaScript and WebGL, 8th Wall Web is a complete implementation of 8th Wall's Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time AR on mobile browsers. Features include World Tracking, Image Targets, and Face Effects.

The 8th Wall Cloud Editor allows you to develop fully featured Web AR projects and collaborate with team members in real time. Built-In Hosting allows you to publish projects to multiple deployment states hosted on 8th Wall's reliable and secure global network, including a password-protected staging environment. Self-Hosting is also available.

8th Wall Web is easily integrated into 3D JavaScript frameworks such as A-Frame, Babylon.js, and three.js.


What's New

8th Wall Web Release 15 is now available! This release provides a number of updates and enhancements:

Release 15: (2020-October-09, v15.0.9.487 / 2020-September-22, v15.0.8.487)

  • New Features:

    • 8th Wall Curved Image Targets:

      • Added support for cylindrical image targets such as those wrapped around bottles, cans and more.
      • Added support for conical image targets such as those wrapped around coffee cups, party hats, lampshades and more.
  • Fixes and Enhancements:

    • Improved tracking quality for SLAM and Image Targets.
    • Fixed an issue with MRCS Holograms and device routing on iOS 14.
    • Fixed an issue with Face Effects and Image Targets where updates to mirroredDisplay were not reflected during runtime.
    • Improved experience for some Android devices with multiple cameras. (v15.0.9.487)
    • Fixed a raycasting issue with A-Frame 1.0.x. (v15.0.9.487)
  • XRExtras Enhancements:

    • New A-Frame components for easy Curved Image Target development:

      • 3D container prefab component that forms a portal-like container that 3D content can be placed inside.
      • Video playback prefab component for easily enabling video on curved image targets.
    • Improved detection of Web Share API Level 2 support.

Click here to see a full list of changes.

Requirements

Web Browser Requirements

Mobile browsers require the following functionality to support 8th Wall Web experiences:

  • WebGL (canvas.getContext('webgl') || canvas.getContext('webgl2'))
  • getUserMedia (navigator.mediaDevices.getUserMedia)
  • deviceorientation (window.DeviceOrientationEvent - only needed if SLAM is enabled)
  • Web-Assembly/WASM (window.WebAssembly)

NOTE: 8th Wall Web experiences must be viewed via https. This is required by browsers for camera access.
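The requirements above can be sketched as a single support check. This is a minimal illustration, not 8th Wall's own code: it takes window-like and canvas-like objects as parameters so the same logic runs in a browser or under a test harness with mocks.

```javascript
// Return the list of required WebAR features missing from this environment.
function missingWebARFeatures(win, canvas) {
  const missing = [];
  // WebGL: either a 'webgl' or 'webgl2' context is acceptable.
  if (!(canvas.getContext('webgl') || canvas.getContext('webgl2'))) {
    missing.push('WebGL');
  }
  // Camera access requires getUserMedia (and an https page).
  if (!(win.navigator && win.navigator.mediaDevices &&
        win.navigator.mediaDevices.getUserMedia)) {
    missing.push('getUserMedia');
  }
  // Device orientation events are only needed when SLAM is enabled.
  if (!win.DeviceOrientationEvent) {
    missing.push('deviceorientation');
  }
  // The engine itself runs as WebAssembly.
  if (!win.WebAssembly) {
    missing.push('WebAssembly');
  }
  return missing;
}

// In a browser you would call it with the real globals:
// missingWebARFeatures(window, document.createElement('canvas'));
```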

This translates to the following compatibility for iOS and Android devices:

  • iOS:

    • Safari (iOS 11+)
    • Apps that use SFSafariViewController web views (iOS 13+)

      • Apple added getUserMedia() support to SFSafariViewController in iOS 13. 8th Wall works within iOS 13 apps that use SFSafariViewController web views.
      • Examples: Twitter, Slack, Discord, Gmail, Hangouts, and more.
  • Android:

    • Browsers known to natively support the features required for WebAR:

      • Chrome
      • Firefox
      • Samsung Internet
      • Microsoft Edge
      • Brave
    • Apps using Web Views known to support the features required for WebAR:

      • Twitter, WhatsApp, Slack, Gmail, Hangouts, Reddit, LinkedIn, and more.

Link-out support

For apps that don’t natively support the features required for WebAR, our XRExtras library provides flows to direct users to the right place, maximizing accessibility of your WebAR projects from these apps.

Examples: Instagram, Snapchat, Facebook, WeChat

Screenshots:

  • Launch Browser from Menu (iOS)
  • Launch Browser from Button (Android)
  • Copy Link to Clipboard

Supported Frameworks

8th Wall Web is easily integrated into 3D JavaScript frameworks such as A-Frame, Babylon.js, and three.js.

Supported Features

Platform: 8th Wall Web

  • Lighting: Yes
  • AR Background: Yes
  • Camera Motion: 6 DoF (Scale Free)
  • Horizontal Surfaces: Yes, Instant Planar
  • Vertical Surfaces: No
  • Image Detection & Tracking: Yes
  • World Points: Yes
  • Hit Tests: Yes

Quick Start Guide

This guide provides all of the steps required to get you up and running with the 8th Wall Cloud Editor and Built-in Hosting platform.

Create an 8th Wall Account

Creating an 8th Wall Account gives you the ability to:

  • Create rich Web AR experiences that run directly in a mobile web browser.
  • Collaborate with team members and store code in source control.
  • Instantly preview projects as you develop.
  • Wirelessly debug your code in real time with live console logs from multiple devices.
  • Publish projects hosted on 8th Wall's global network.
  • Manage subscriptions, billing information and licenses for commercial projects.

New Users: Create an account at https://www.8thwall.com/sign-up

Existing Users: Login at https://www.8thwall.com/login using your email address and password.

Start Free Trial

The 8th Wall Cloud Editor and Built-in Hosting platform are available to Agency and Business workspaces. 8th Wall offers a 14-day free trial so you can get access to the full power of 8th Wall and begin building WebAR experiences.

If you would like to upgrade to a paid plan after sign-up, please refer to the Upgrade Plan section of the documentation for more information on upgrades.

  1. Enter your email and click Start Trial

TrialStart

  2. Create your account by entering your personal details into the form and setting a password.

TrialCreateAccount

  3. Confirm your email address. An email will be sent with a verification code. Enter the verification code and click Confirm.

TrialConfirmEmail

  4. Enter payment details and select a plan. NOTE: You will NOT be charged anything at this time. You can cancel at any time during the 14-day free trial period to avoid charges.

  5. Review and confirm. Click Start free trial to continue and activate your 14-day free trial.

Create your workspace

  1. On the following screen, enter a descriptive workspace name. Most people use their company name.

  2. Select a workspace URL. Pick something relevant to your workspace name. IMPORTANT: If you use 8th Wall hosting, this value will be used by default as the sub-domain in your URL (e.g. mycompany.8thwall.app/project-name). You do have the ability to connect custom domains later.

CreateYourWorkspace

Start a new project

  1. From the Homepage (logged in) or Workspace Dashboard, click "Start a new Project"

StartNewProject

  2. Enter Basic info for the project: Please provide a Title, URL, Description (optional) and Cover Image (optional). All of these fields, except URL, can be edited later in the Project Settings page.

  3. Select a Project Type:

  • Non-Commercial: Your Agency or Business plan includes unlimited non-commercial projects. If you're not yet ready to begin development on a commercial project, select Non-commercial from the dropdown to create a project for pitching or demoing purposes only. Non-commercial projects can be promoted to commercial later once you have upgraded to a paid plan.

  • Commercial: If you're ready to begin development on a commercial project, choose Commercial. Select a Commercial Use Agreement and complete the wizard to purchase a DEVELOP license. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial and begin your Agency or Business subscription.

NewProjectBasicInfo

Clone template project

  1. After you have created a project, you'll be taken to the Cloud Editor. Select a template to clone. In this example, we'll select "A-Frame: Place Ground". This interactive example allows the user to grow trees on the ground by tapping. This showcases raycasting, instantiating objects, importing 3D models and the animation system.

EditorCloneProject

  2. Click the Load Project button.

Live Preview

  1. At the top of the Cloud Editor window, click the Preview button.

  2. Scan the QR code with your mobile device to open a web browser and look at a live preview of the WebAR project.

GettingStartedPreview

  3. When the page loads, you'll be prompted for access to motion and orientation sensors (on some devices) and the camera (all devices). Click Allow for all permission prompts. You will be taken to the private development URL for this project.

  4. When the WebAR preview loads, tap on the ground to spawn trees.

  5. Result:

PlaceGround

Publish your project

At this point, you have a fully operational WebAR project and have previewed it on your own device. Next, publish your demo project using 8th Wall's Built-in Hosting so that it can be viewed publicly by anyone on the internet.

Note: Non-commercial projects can only be used for demo purposes. Commercial projects require additional commercial licenses. See https://www.8thwall.com/pricing for more info.

  1. At the top right of the Cloud Editor window, click Publish

  2. You will see a list of commits (in this case there is only one - the initial clone) as well as the Development, Staging and Public URLs for the project. Promote both Staging and Public to the first commit in the list by selecting both radio buttons.

  3. Click Publish

GettingStartedPublish

View the public project

  1. Go back to the Project Dashboard in the left navigation. In the QR 8 Code section, the Public project URL will be displayed along with both an 8th.io shortlink and associated QR code.

  2. Scan the QR code with your mobile device to view the Public WebAR experience.

Overview

8th Wall is a complete Web AR solution that allows you to create, collaborate and publish Web AR experiences that run directly in a mobile web browser.

Create an 8th Wall Account to:

  • Create rich Web AR experiences that run directly in a mobile web browser.
  • Collaborate with team members and store code in source control.
  • Instantly preview projects as you develop.
  • Wirelessly debug your code in real time with live console logs from multiple devices.
  • Publish projects hosted on 8th Wall's global network.
  • Manage subscriptions, billing information and licenses for commercial projects.

New Users: Create an account at https://www.8thwall.com/sign-up

Existing Users: Login at https://www.8thwall.com/login using your email address and password.

Homepage

The 8th Wall homepage, when logged in, provides access to all of your workspaces and recent projects. Select a Workspace or Project to access its dashboard.

Homepage

Homepage guide:

  1. Start a new project
  2. User Settings (Profile, Manage Workspaces, Logout)
  3. Workspace Type
  4. Workspace
  5. Project
  6. Workspace the project belongs to
  7. Project commercial status
  8. Project shortcuts

Workspaces

A Workspace is a logical grouping of Projects, Users, and Billing. Workspaces can contain one or more Users, each with different permissions. Users can belong to multiple Workspaces.

The Workspace dashboard allows you to:

  • Create new Projects.
  • Manage existing projects.
  • Manage workspace team members and permissions.
  • Manage subscriptions and commercial licenses.

Initial Workspace

When creating a new 8th Wall account directly from 8thwall.com, you will start with a workspace with a 14-day free trial.

If signing up via an invitation from another 8th Wall user, you will be added as a team member of their existing workspace.

Select Workspace

To select a workspace, perform one of the following:

  1. Navigate to https://www.8thwall.com and login. The Home page will display a carousel of all the workspaces you are a member of. Click a workspace card to select.

WorkspaceCarousel

  2. Click on your name at the top-right of the page and select Workspaces

ConsoleWorkspaceMenu

Create Workspace

  1. Under your name at the top right, select "Manage Workspaces" (or navigate directly to https://www.8thwall.com/workspaces)
  2. Click "Create a New Workspace"
  3. Select Workspace Type
  4. Click Create

ConsoleWorkspaceCreate

Teams

Each Workspace has a team containing one or more Users, each with different permissions. Users can belong to multiple Workspace teams.

Add others to your team to allow them to access the Projects in your workspace. This allows you to collaboratively create, manage, test and publish Web AR projects as a team.

Invite Users

  1. Select your workspace.
  2. Click Team in the left navigation
  3. Enter the email address(es) for the users you want to invite. Enter multiple emails separated by commas.
  4. Click Invite users

ManagePlans

User Roles

Team members can have one of three roles:

  • OWNER
  • ADMIN
  • DEV

Capabilities for each role:

Capability                  OWNER  ADMIN  DEV
Projects - View               X      X     X
Projects - Create             X      X     X
Projects - Edit               X      X     X
Projects - Delete             X      X     X
Authorize Devices             X      X     X
Teams - View Users            X      X     X
Teams - Invite Users          X      X
Teams - Remove Users          X      X
Teams - Manage User Roles     X      X
Workspaces - Create           X      X     X
Workspaces - Edit             X
Workspaces - Manage Plans     X
Edit Profile                  X      X     X

User Handles

Each user in your workspace has a handle. Workspace handles will be the same as the User Handle defined in a user's profile unless already taken or customized by a user.

Handles are used as part of the URL (in the format "handle-client-workspace.dev.8thwall.app") to preview new changes when developing with the 8th Wall Cloud Editor.

Example: tony-default-mycompany.dev.8thwall.app

Important

  • Before changing your handle, make sure all work in the Cloud Editor is saved.
  • Any of your unlanded changes to projects in the workspace will be abandoned.
  • Any clients you created in projects in the workspace will be deleted.

Modify User Handle

  1. Select your workspace.
  2. Click Team in the left navigation
  3. Enter a new Handle.
  4. Click ✔ to save.
  5. Confirm that you want to change your handle.

Change Handle

Account Settings

The Account page allows you to manage your plan, billing information, commercial licenses, and invoices:

Upgrade Plan

Please refer to https://www.8thwall.com/pricing for detailed information on plans and pricing.

For licensing inquiries, please contact the 8th Wall team by filling out the form at https://www.8thwall.com/licensing

To Upgrade to an Agency or Business plan:

  1. Select your workspace.
  2. Click Account in the left navigation
  3. Click the Upgrade button of the desired plan
  4. Completely fill out the form.
  5. Click Complete Purchase to activate your Agency or Business subscription. Monthly plan subscription plus any new commercial licenses will be charged to this payment method unless otherwise specified.

As part of the upgrade process, if you haven't already, you may be asked to select a Workspace Name and Workspace URL:

AccountUpgrade

  • Workspace Name: A descriptive name for your workspace (e.g. "Acme Inc.")
  • Workspace URL: This value is used as part of the URL to access your 8th Wall workspace and related resources. It is also used as the subdomain in default URLs to 8th Wall hosted projects. This value is automatically generated from the Workspace Name, but can be customized. This cannot be changed later.

    • Workspace URL example: www.8thwall.com/acme/
    • Hosted Project URL example: acme.8thwall.app/my-web-app

Cancel Free Trial

NOTE: If you are on a 14-day free trial, at the end of the trial period your account will automatically upgrade to a paid plan. Cancel online before the end of the trial period to avoid being charged for the monthly subscription.

To cancel during Free Trial:

  1. Select your workspace.
  2. Click Account in the left navigation.
  3. The Account page will display your current plan.
  4. Click Cancel my free trial

CancelTrial

Cancel Plan

To cancel an existing Agency or Business plan:

  1. Select your workspace.
  2. Click Account in the left navigation.
  3. The Account page will display your current plan.
  4. Click Turn off to disable auto-renew for your subscription. Confirm whether the subscription should be cancelled immediately or at the end of the current billing period.

Note: You cannot cancel an Agency or Business subscription if the workspace has any active commercial apps.

AccountCancel

Update Billing Information

To update account billing information:

  1. Select your workspace.
  2. Click Account in the left navigation.
  3. In the Account Information section, click Edit
  4. Enter your new account billing information and click Update to save changes.

Updated account billing information will be used in future invoices.

Manage Commercial Licenses

Commercial licenses and their payment methods can be managed from the Account page of your workspace. This section will only be displayed if you have active commercial licenses.

  1. Select your workspace.
  2. Click Account in the left navigation.

Commercial Projects

Cancel an active commercial license

IMPORTANT: Cancelling the license for an active commercial project will disable it and the WebAR project can no longer be viewed. This action cannot be undone!

  1. Click Edit.
  2. Click the "X" next to the commercial project to cancel.
  3. A warning dialog will be displayed.
  4. Type 'REMOVE' to confirm you want to cancel and click "OK".

Change payment method for an active commercial license

  1. Click Edit
  2. To the right of the commercial license, you'll see a down arrow. Click it to display a list of available payment methods, and select a new one.
  3. Click Done.

Commercial Projects

Billing Summary / Invoices

The Billing Summary section of the Account page allows you to view and download invoices, and make payments for any outstanding invoices. Billing Summary displays:

  • Invoice Number (click to download PDF invoice)
  • Date
  • Invoice Total
  • Amount Paid
  • Balance Due
  • Invoice Status

Commercial Projects

Projects

This section describes how to create, manage and publish WebAR projects.

Create Project

  1. From the Homepage (logged in) or Workspace Dashboard, click "Start a new Project"

  2. Select the workspace for this project.

  3. Enter Basic info for the project: If you are on a Free plan, give the Project a name and click Create. If you are on an Agency or Business plan, please enter: Title, URL, Description (optional) and Cover Image (optional). All of these fields, except URL, can be edited later in the Project Settings page.

  4. Select a Project Type:

  • Non-Commercial: Your Agency or Business plan includes unlimited non-commercial projects. If you're not yet ready to begin development on a commercial project, select Non-commercial from the dropdown to create a project for pitching or demoing purposes only. Non-commercial projects can be promoted to commercial later.

  • Commercial: If you're ready to begin development on a commercial project, choose Commercial. Select a Commercial Use Agreement and complete the wizard to purchase a DEVELOP license.

Project Dashboard

The project dashboard is your hub for managing 8th Wall projects. From the project dashboard page you can manage project settings, access the 8th Wall Code Editor, activate commercial licenses, manage image targets and more.

The direct URL to your Project Dashboard is in the format: www.8thwall.com/workspacename/projectname

Project Dashboard Overview

ProjectDashboardOverview

  1. Project Dashboard
  2. Device Authorization
  3. Open Editor
  4. Code Editor
  5. Project History
  6. Image Targets
  7. Project Settings
  8. Commercial License Status
  9. Image Targets
  10. Connected Domains
  11. Campaign Duration
  12. Campaign Redirect URL
  13. QR Code and Embeds
  14. Usage and Recent Trends

Project License

8th Wall Projects can have a status of Non-commercial, DEVELOP, or LAUNCH.

Non-Commercial projects are intended for pitching and demo purposes only.

Once you begin development on a commercial project, you must obtain a commercial license by promoting your project to a DEVELOP license.

When the project is ready to launch, you must promote it to a LAUNCH license.

Note: You can promote your project to the next license at any time, but you may not demote it.

Managing Image Targets

To manage image targets for a given Project, click either the Image Target icon in the left navigation, or the "Manage Image Targets" link on the Project Dashboard.

ManageImageTargets

For detailed information on Image Targets, please refer to the Image Target documentation.

Connected Domains

8th Wall allows you to use custom domains for both Self-Hosted projects as well as 8th Wall hosted projects.

Self Hosted Projects

If you have upgraded to a paid Agency or Business plan, you can host your WebAR project publicly on your own web server (and view it without device authorization). In order to do so, you will need to specify a list of domains approved to host your Project.

  1. From the Project Dashboard page, select "Manage domains"

  2. Expand "I am hosting this project myself"

  3. Enter the domains where you will be self-hosting your project. A domain may not contain a wildcard, path, or port. Click the "+" to add multiple.

Note: Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH.

SelfHostedDomainList
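The subdomain-specific rule in the note above amounts to exact hostname matching. The helper below is an illustration of that rule (an assumption about the matching behavior, not 8th Wall's implementation):

```javascript
// Approved domains are compared as full hostnames: no wildcards, so
// "mydomain.com" and "www.mydomain.com" must each be listed separately.
function isApprovedHost(hostname, approvedDomains) {
  return approvedDomains
    .map((d) => d.toLowerCase())
    .includes(hostname.toLowerCase());
}
```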

8th Wall Hosted Project

If you are using the Cloud Editor to develop your WebAR project you can take advantage of 8th Wall's Built-In Hosting.

By default, 8th Wall provides 8thwall.app URLs (e.g. myworkspace.8thwall.app/my-project-name) for hosted projects.

If you have your own domain and want to use it with an 8th Wall hosted project, you can connect your domain to your 8th Wall project (or workspace) while keeping it registered with its current registrar. To do so you'll need to update your domain's DNS settings.

NOTE: It is recommended that you use a subdomain (e.g. ar.mydomain.com) instead of the root domain (e.g. mydomain.com), as not all DNS providers support CNAME or ALIAS records for the root domain. Contact your DNS provider to confirm whether they do.

  1. From the Project Dashboard page, select "Manage domains"

  2. Expand "I am hosting this project on 8th Wall"

  3. Enter your custom domain (e.g. www.mydomain.com), and optionally any additional domains you want redirected to your custom domain.

ConnectedDomains

  4. Click Connect. This operation can take a minute or two. Click the "Refresh status" button if needed.

  5. Verify ownership of your domain. In order to verify that you are the owner of the custom domain, you must login to your DNS registrar's website and add a verification record to your domain. These changes can take up to 24 hours to propagate.

  6. Once verification is complete, add DNS records to connect your domain(s) to your project.

Campaign Duration

Launched campaigns, by default, will run indefinitely until you decide to end the campaign. Ending a campaign will remove its commercial license and the WebAR project can no longer be viewed.

Campaign Duration settings can be managed from the Project Dashboard. The following options are available:

  • Ongoing: The campaign will run indefinitely and you will be billed for a monthly LAUNCH license. The date/time of the next renewal will be displayed.
  • End after current billing cycle: The campaign will run through the current billing period, and then end.
  • Schedule an end date and time: Select a custom date and time for the campaign to end.

To modify, click "Edit". Make your changes and click "Update" to save your settings.

To cancel the campaign immediately, visit the workspace Account page and manage commercial licenses.

Campaign Redirect URL

When a launched project is cancelled or completed, the WebAR project can no longer be viewed. Users visiting the site will see an error message stating that the project is no longer available. It is a best practice to redirect users to another URL once your campaign is over.

Specify a Campaign Redirect URL to automatically redirect your users to a different site when your campaign has ended.

Campaign Redirect URLs are supported with both 8th Wall hosted and Self-hosted Projects.

From the Project Dashboard, click "Connect a URL" and enter the desired redirect URL

QR 8Code

As a convenience, 8th Wall branded QR codes (aka "8 Codes") can be generated for a Project, making it easy to scan from a mobile device to access your WebAR project. You are always welcome to generate your own QR codes, or use third-party QR code generation websites or services.

An "8th.io" shortlink will also be generated.

To generate a QR code, enter the desired URL and click Connect.

ProjectDashboardOverview

The generated QR code can be downloaded in either PNG or SVG format to be included on a website, physical media, or other locations to make it easy for users to scan with their smartphones to visit the connected URL.

Example: ProjectDashboardOverview

8th Wall Projects provide basic usage analytics so that you can see how many times a project has been viewed in the past 30 days. The usage graph is a rolling 30-day window and can display either total or daily usage during that time period.

ProjectDashboardOverview

Projects with usage based commercial licenses will also display view counts for the current billing period. Usage is measured in 100 view increments. Usage from previous months can be found in the Billing Summary of the Account page.
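As a worked example of the 100-view increment above (assuming partial increments round up, which the source does not state explicitly):

```javascript
// Convert a raw view count into billed 100-view increments.
// Illustration only; the function name is hypothetical.
function billableIncrements(views, incrementSize = 100) {
  return Math.ceil(views / incrementSize);
}
```

So 250 views in a billing period would be measured as 3 increments.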

Project Settings

The Project Settings page allows you to:

  • Set developer preferences, such as Keybindings and Dark mode
  • Edit Project information:

    • Title
    • Description
    • Enable/Disable default splash screen
    • Update cover image
  • Manage staging passcode
  • Whitelist domains for self-hosting
  • Access the Project's App Key string
  • Set engine version
  • Unpublish app
  • Temporarily disable project
  • Delete project

Code Editor Preferences

The following Code Editor preferences can be set:

  • Dark Mode (On/Off)

    • Use a darker color palette in the Code Editor, with darker background colors and lighter foreground colors.
  • Keybindings

    • Enable keybindings from popular text editors. Select from:

      • Normal
      • Sublime
      • Vim
      • Emacs
      • VSCode

Basic Information

Project Settings allows you to edit the Basic Information for your Project:

  • Project Title

  • Description

  • Enable/Disable default splash screen

  • Update cover image

Staging Passcode

When your app is staged to XXXXX.staging.8thwall.app (where XXXXX represents your Workspace URL), it is passcode protected. In order to view the WebAR Project, a user must first enter the passcode you provide. This is a great way to preview projects with clients or other stakeholders prior to launching publicly.

A passcode should be 5 or more characters and can include letters (A-Z, lower or upper case), numbers (0-9) and spaces.
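The passcode rules above can be expressed as a simple check. This is a client-side illustration of the stated rules only, not 8th Wall's own validation:

```javascript
// Valid: 5 or more characters, drawn from letters (either case),
// digits, and spaces.
function isValidStagingPasscode(code) {
  return /^[A-Za-z0-9 ]{5,}$/.test(code);
}
```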

Self Hosted Domains

If you have upgraded to an Agency or Business plan, you can host your Web Application publicly on your own web server (and view it without device authorization). In order to do so, you will need to specify a list of domains approved to host your Project.

  1. From the Project Dashboard page, select "Manage domains".

  2. Expand "I am hosting this project myself"

  3. Enter the domains where you will be self-hosting your project. A domain may not contain a wildcard, path, or port. Click the "+" to add multiple.

Note: Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH.

SelfHostedDomainList

App Key

If you are building a Self-hosted Project, you'll need to add your App Key to the project.

Click the Copy button and then paste it into your index.html

Example:

<script src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXX"></script>

(Replace the XXX's with your App Key string)

Engine Version

NOTE: This is only available to workspaces on Agency or Business plans.

For each public Project, you can specify the version of the XR engine used when serving public web clients.

If you select a Channel (release or beta), public clients will always be served the most recent version of 8th Wall Web from that channel. If you freeze the version, you will need to manually unfreeze to receive the latest features and improvements of the engine.

In general, 8th Wall recommends using the official release channel for production web apps.

If you would like to test your web app against a pre-release version of 8th Wall Web, which may contain new features and/or bug fixes that haven't gone through full QA yet, you can switch to the beta channel:

To Freeze to a specific version, select the desired Channel (release or beta) and click the Freeze button

To Re-join a Channel and stay up-to-date, click the Unfreeze button. This will unfreeze the Engine Version associated with your Project and re-join a Channel (release or beta).

Unpublish App

Unpublishing your project will remove it from staging (XXXXX.staging.8thwall.app) or public (XXXXX.8thwall.app).

You can publish it again at any time from the Code Editor or Project History pages.

Click Unpublish Staging to take your Project down from XXXXX.staging.8thwall.app

Click Unpublish Public to take your Project down from XXXXX.8thwall.app

Temporarily Disable Project

If you disable your project, your app will not be viewable. Views will not be counted while disabled.

You will still be charged for any active commercial licenses on projects that are temporarily disabled.

Toggle the slider to Disable / Enable your project.

Delete Project

A project with a commercial license cannot be deleted. Visit the Account page to cancel an active commercial project.

Deleting a Project will cause it to stop working. You cannot undo this operation.

Image Targets

Image Target Overview

Bring signage, magazines, boxes, bottles, cups, and cans to life with 8th Wall Image Targets. 8th Wall Web can detect and track flat, cylindrical and conical shaped image targets, allowing you to bring static content to life.

Not only can your designated image target trigger a web AR experience, but your content also has the ability to track directly to it.

Flat image targets can work in tandem with our World Tracking (SLAM), enabling experiences that combine image targets and markerless tracking.

You may track up to 5 image targets simultaneously with World Tracking enabled or up to 10 when it is disabled.

Up to 5 image targets per project can be "Autoloaded". An Autoloaded image target is enabled immediately as the page loads. This is useful for apps that use 5 or fewer image targets such as product packaging, a movie poster or business card.

The set of active image targets can be changed at any time by calling XR8.XrController.configure(). This lets you manage hundreds of image targets per project making possible use cases like geo-fenced image target hunts, AR books, guided art museum tours and much more. If your project utilizes SLAM most of the time but image targets some of the time, you can improve performance by only loading image targets when you need them. You can even read uploaded target names from URL parameters stored in different QR Codes, allowing you to have different targets initially load in the same web app depending on which QR Codes the user scans to enter the experience.
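The QR-code flow described above can be sketched as follows. The 'targets' query parameter and the target names are hypothetical; XR8.XrController.configure() is the entry point named above, called here once the engine has loaded:

```javascript
// Parse an image target list from a URL query string, e.g. a QR code
// pointing at https://example.8thwall.app/tour?targets=lobby,gallery-1
function targetsFromUrl(search) {
  const raw = new URLSearchParams(search).get('targets');
  return raw ? raw.split(',').filter((name) => name.length > 0) : [];
}

// Browser only, after the engine is ready:
// XR8.XrController.configure({imageTargets: targetsFromUrl(window.location.search)});
```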

Image Target Types

  • Flat (FlatTarget): Track 2D images like posters, signs, magazines, boxes, etc. Flat image targets can be used in tandem with World Tracking (SLAM).
  • Cylindrical (CylindricalTarget): Track images wrapped around cylindrical items like cans and bottles.
  • Conical (ConicalTarget): Track images wrapped around objects with a different top vs. bottom circumference, like coffee cups.

Image Target Requirements

  • File Types: .jpg, .jpeg or .png
  • Dimensions:

    • Minimum: 480 x 640 pixels
    • Maximum length or width: 2048 pixels.

      • Note: If you upload something larger, the image is resized down to a max length/width of 2048 pixels, maintaining aspect ratio.

Image Target Quantities

You may track up to 5 image targets simultaneously while World Tracking (SLAM) is running. If you disable World Tracking (SLAM) by setting "disableWorldTracking: true" and specify your image target set programmatically, you may track up to 10 simultaneously.

  • Active images per Project (World Tracking enabled): 5
  • Active images per Project (World Tracking disabled): 10
  • Uploaded images per Project: Up to 1,000
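For example, a project that only needs image tracking can disable SLAM and activate its target set in a single configure call (the target names here are hypothetical):

```javascript
// Disable world tracking (SLAM) to allow up to 10 simultaneous targets,
// and activate a programmatic target set.
XR8.XrController.configure({
  disableWorldTracking: true,
  imageTargets: ['label-front', 'label-back'],
})
```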

Manage Image Targets

Click the Image Target icon in the left navigation or the "Manage Image Targets" link on the Project Dashboard to manage your image targets.

ManageImageTargets

This screen allows you to create, edit, and delete the image targets associated with your project. Click on an existing image target to edit. Click the "+" icon for the desired image target type to create a new one.

ManageImageTargets2

Create Flat Image Target

  1. Click the "+ Flat" icon to create a new flat image target.

ImageTargetFlat1

  2. Upload Flat Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.

  3. Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.

  4. Edit Flat Image Target properties:
  • (1) Give your image target a name by editing the field at the top left of the window.
  • (2) IMPORTANT! Test your image target: The best way to determine whether your uploaded image will make a good or bad image target (see Optimizing Image Target Tracking) is to use the Simulator to assess tracking quality. Scan the QR code with your camera app to open the simulator link, then point your device at the screen or physical object.
  • (3) Click Load automatically if you want the image target to be enabled automatically as the WebAR project loads. Up to 5 image targets can be loaded automatically without writing a single line of code. More targets can be loaded programmatically through the JavaScript API.
  • (4) Optional: If you would like to add metadata to your image, in either Text or JSON format, click the Metadata button at the bottom of the window.

EditFlatImageTarget

  5. Changes made on this screen are automatically saved. Click Close to return to your image target library.

Create Cylindrical Image Target

  1. Click the "+ Cylindrical" icon to create a new cylindrical image target.

ImageTargetFlat1

  2. Upload Cylindrical Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.

  3. Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.

  4. Edit Cylindrical Image Target properties:
  • (1) Give your image target a name by editing the field at the top left of the window.
  • (2) Drag the sliders until the shape of your label appears as expected in the simulator, or input the measurements directly.
  • (3) IMPORTANT! Test your image target: The best way to determine whether your uploaded image will make a good or bad image target (see Optimizing Image Target Tracking) is to use the Simulator to assess tracking quality. Scan the QR code with your camera app to open the simulator link, then point your device at the screen or physical object.
  • (4) Click Load automatically if you want the image target to be enabled automatically as the WebAR project loads. Up to 5 image targets can be loaded automatically without writing a single line of code. More targets can be loaded programmatically through the JavaScript API.
  • (5) Optional: If you would like to add metadata to your image, in either Text or JSON format, click the Metadata button at the bottom of the window.

EditCylindricalImageTarget

  5. Changes made on this screen are automatically saved. Click Close to return to your image target library.

Create Conical Image Target

  1. Click the "+ Conical" icon to create a new conical image target.

ImageTargetFlat1

  2. Upload Conical Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image. The uploaded image should be in "unwrapped", aka "rainbow" format, cropped like so:

  3. Set Large Arc Alignment: Drag the slider until the red line overlays the uploaded image's large arc.

  4. Set Small Arc Alignment: Do the same for the small arc. Drag the slider until the blue line overlays the uploaded image's small arc.

  5. Set Tracking Region (and Orientation): Drag and zoom on the image to set the portion of the image that is detected and tracked. This should be the most feature-rich area of your image.

  6. Edit Conical Image Target properties:
  • (1) Give your image target a name by editing the field at the top left of the window.
  • (2) Drag the sliders until the shape of your label appears as expected in the simulator, or input the measurements directly.
  • (3) IMPORTANT! Test your image target: The best way to determine whether your uploaded image will make a good or bad image target (see Optimizing Image Target Tracking) is to use the Simulator to assess tracking quality. Scan the QR code with your camera app to open the simulator link, then point your device at the screen or physical object.
  • (4) Click Load automatically if you want the image target to be enabled automatically as the WebAR project loads. Up to 5 image targets can be loaded automatically without writing a single line of code. More targets can be loaded programmatically through the JavaScript API.
  • (5) Optional: If you would like to add metadata to your image, in either Text or JSON format, click the Metadata button at the bottom of the window.

EditConicalImageTarget

  7. Changes made on this screen are automatically saved. Click Close to return to your image target library.

Edit Image Targets

Click on any of the image targets under My Image Targets to view and/or modify their properties:

  1. Image Target Name
  2. Sliders / Measurements (Cylindrical/Conical image targets only)
  3. Simulator QR Code
  4. Delete Image Target
  5. Load Automatically
  6. Metadata
  7. Orientation and Dimensions
  8. Autosave status
  9. Close

Changing Active Image Targets

The set of active image targets can be modified at runtime by calling XR8.XrController.configure().

Note: All currently active image targets will be replaced with the ones specified in this list.

Example - Change active image target set

XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})

Optimizing Image Target Tracking

To ensure the highest quality image target tracking experience, be sure to follow these guidelines when selecting an image target.

DO have:

  • a lot of varied detail
  • high contrast

DON'T have:

  • repetitive patterns
  • excessive dead space
  • low resolution images

Color: Image target detection cannot distinguish between colors, so don't rely on it as a key differentiator between targets.

For best results, use flat surfaces for image target tracking.

Consider the reflectivity of your image target's physical material. Glossy surfaces and screen reflections can lower tracking quality. Use matte materials in diffuse lighting conditions for optimal tracking quality.

Note: Detection happens fastest in the center of the screen.

Good Markers Bad Markers

Image Target Events

8th Wall Web emits Events / Observables for various events in the image target lifecycle (e.g. imageloading, imagescanning, imagefound, imageupdated, imagelost). Please see the API reference for instructions on handling these events in your web application.
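The lifecycle events above can be handled with a small listener function. A minimal sketch for non-AFrame projects, assuming the event detail carries the target's name as documented in the API reference (the pipeline module name and event string here should be checked against that reference):

```javascript
// Handler that reads the found target's name out of the event detail.
const onImageFound = ({detail}) => {
  console.log(`Found image target: ${detail.name}`)
  return detail.name
}

// Hypothetical wiring as a camera pipeline module; confirm the exact
// event name ('reality.imagefound') in the API reference:
// XR8.addCameraPipelineModule({
//   name: 'image-target-logger',
//   listeners: [{event: 'reality.imagefound', process: onImageFound}],
// })
```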

Example Projects

https://github.com/8thwall/web/tree/master/examples/aframe/artgallery

https://github.com/8thwall/web/tree/master/examples/aframe/flyer

Customizing the Load Screen

8th Wall's XRExtras library provides modules that handle the most common WebAR application needs, including the load screen, social link-out flows and error handling.

The Loading module displays a loading overlay and camera permissions prompt while libraries are loading, and while the camera is starting up. It's the first thing your users see when they enter your WebAR experience.

This section describes how to customize the loading screen by providing values that change the color, load spinner, and load animation to match the overall design of your experience.

IDs / Classes to override

Loading Screen:
  1. #requestingCameraPermissions
  2. #requestingCameraIcon
  3. #loadBackground
  4. #loadImage

iOS (13+) Motion Sensor Prompt:
  1. .prompt-box-8w
  2. .prompt-button-8w
  3. .button-primary-8w

To customize the text, you can use a MutationObserver. Please refer to the code example below.

A-Frame component parameters

If you are using XRExtras with an A-Frame project, the xrextras-loading module makes it easy to customize the load screen via the following parameters:

Parameter Type Description
cameraBackgroundColor Hex Color Background color of the loading screen's top section behind the camera icon and text (See above. Loading Screen #1)
loadBackgroundColor Hex Color Background color of the loading screen's lower section behind the loadImage (See above. Loading Screen #3)
loadImage ID The ID of an image. The image needs to be an <a-asset> (See above. Loading Screen #4)
loadAnimation String Animation style of loadImage. Choose from spin (default), pulse, scale, or none

A-Frame Component Example

<a-scene
  tap-place
  xrextras-almost-there
  xrextras-loading="
    loadBackgroundColor: #007AFF;
    cameraBackgroundColor: #5AC8FA;
    loadImage: #myCustomImage;
    loadAnimation: pulse"
  xrextras-runtime-error
  xrweb>

<a-assets>
  <img id="myCustomImage" src="assets/my-custom-image.png">
</a-assets>

JavaScript/CSS method

const load = () => { 
  XRExtras.Loading.showLoading()
  console.log('customizing loading spinner')
  const loadImage = document.getElementById("loadImage")
  if (loadImage) {
    loadImage.src="img/my-custom-image.png"
  }
}
window.XRExtras ? load() : window.addEventListener('xrextrasloaded', load)

CSS example

#requestingCameraPermissions {
  color: black;
  background-color: white;
}
#requestingCameraIcon {
  /* This changes the image from white to black */
  filter: invert(1);
}

.prompt-box-8w {
  background-color: white;
  color: #00FF00;
}
.prompt-button-8w {
  background-color: #0000FF;
}

.button-primary-8w {
  background-color: #7611B7;
}

iOS (13+) Motion Sensor Prompt Text Customization

let inDom = false
const observer = new MutationObserver(() => {
  if (document.querySelector('.prompt-box-8w')) {
    if (!inDom) {
      document.querySelector('.prompt-box-8w p').innerHTML = '<strong>My new text goes here</strong><br/><br/>Press Approve to continue.'
      document.querySelector('.prompt-button-8w').innerHTML = 'Deny'
      document.querySelector('.button-primary-8w').innerHTML = 'Approve'
    }
    inDom = true
  } else if (inDom) {
    inDom = false
    observer.disconnect()
  }
})
observer.observe(document.body, {childList: true})

Customize Video Recording

8th Wall's XRExtras library provides modules that handle the most common WebAR application needs, including the load screen, social link-out flows and error handling.

The XRExtras MediaRecorder module makes it easy to customize the Video Recording user experience in your project.

This section describes how to customize recorded videos: capture button behavior (tap vs. hold), video watermarks, max video length, end card behavior and appearance, and more.

A-Frame primitives

xrextras-capture-button : Adds a capture button to the scene.

Parameter Type Default Description
capture-mode string "standard" Sets the capture mode behavior. standard: tap to take photo, tap + hold to record video. fixed: tap to toggle video recording. photo: tap to take photo. One of [standard, fixed, photo]

xrextras-capture-config : Configures the captured media.

Parameter Type Default Description
max-duration-ms int 15000 Total video duration (in milliseconds) that the capture button allows. If the end card is disabled, this corresponds to max user record time. 15000 by default.
max-dimension int 1280 Maximum dimension (width or height) of captured video. For photo configuration, please see XR8.CanvasScreenshot.configure()
enable-end-card bool true Whether the end card is included in the recorded media.
cover-image-url string Image source for end card cover image. Uses project's cover image by default.
end-card-call-to-action string "Try it at: " Sets the text string for call to action on end card.
short-link string Sets the text string for end card shortlink. Uses project shortlink by default.
footer-image-url string Powered by 8th Wall image Image source for end card footer image.
watermark-image-url string null Image source for watermark.
watermark-max-width int 20 Max width (%) of watermark image.
watermark-max-height int 20 Max height (%) of watermark image.
watermark-location string "bottomRight" Location of watermark image. One of topLeft, topMiddle, topRight, bottomLeft, bottomMiddle, bottomRight
file-name-prefix string "my-capture-" Sets the text string that prepends the unique timestamp on file name.
request-mic string "auto" Determines if you want to set up the microphone during initialization ("auto") or during runtime ("manual")
include-scene-audio bool true If true, the A-Frame sounds in the scene will be part of the recorded output.

xrextras-capture-preview : Adds a media preview prefab to the scene which allows for playback, downloading, and sharing.

Parameter Type Default Description
action-button-share-text string "Share" Sets the text string in action button when Web Share API 2 is available (iOS 14, Android). “Share” by default.
action-button-view-text string "View" Sets the text string in action button when Web Share API 2 is not available in iOS (iOS 13). “View” by default.

XRExtras.MediaRecorder Events

XRExtras.MediaRecorder emits the following events.

Events Emitted

Event Emitted Description
mediarecorder-photocomplete Emitted after a photo is taken.
mediarecorder-recordcomplete Emitted after a video recording is complete.
mediarecorder-previewopened Emitted after recording preview is opened.
mediarecorder-previewclosed Emitted after recording preview is closed.
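These events can be used, for example, to drive analytics counters. A small sketch; the wiring at the bottom assumes the events are dispatched on window (verify against the XRExtras reference):

```javascript
// A small counter for capture lifecycle events.
const makeCaptureCounter = () => {
  const counts = {photos: 0, videos: 0}
  return {
    counts,
    onPhoto: () => { counts.photos += 1 },
    onVideo: () => { counts.videos += 1 },
  }
}

// Browser wiring (sketch):
// const counter = makeCaptureCounter()
// window.addEventListener('mediarecorder-photocomplete', counter.onPhoto)
// window.addEventListener('mediarecorder-recordcomplete', counter.onVideo)
```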

Example

<xrextras-capture-button capture-mode="standard"></xrextras-capture-button>

<xrextras-capture-config
  max-duration-ms="15000"
  max-dimension="1280"
  enable-end-card="true"
  cover-image-url=""
  end-card-call-to-action="Try it at:"
  short-link=""
  footer-image-url="//cdn.8thwall.com/web/img/almostthere/v2/poweredby-horiz-white-2.svg"
  watermark-image-url="//cdn.8thwall.com/web/img/mediarecorder/8logo.png"
  watermark-max-width="100"
  watermark-max-height="10"
  watermark-location="bottomRight"
  file-name-prefix="my-capture-"
></xrextras-capture-config>

<xrextras-capture-preview
  action-button-share-text="Share"
  action-button-view-text="View"
></xrextras-capture-preview>

Example - Change Action Button CSS

#actionButton {
  /* change color of action button */
  background-color: #007aff !important;
}

Advanced Analytics

8th Wall projects provide basic usage analytics, allowing you to see how many "views" you have received in the past 30 days. If you are looking for more detailed and/or historical analytics, we recommend adding 3rd party web analytics to your WebAR experience.

The process for adding analytics to a WebAR experience is the same as adding them to any non-AR website. You are welcome to use any analytics solution you prefer.

In this example, we’ll explain how to add Google Analytics to your 8th Wall project using Google Tag Manager (GTM) - making it easy to collect custom analytics on how users are both viewing and interacting with your WebAR experience.

Using GTM’s web-based user interface, you can define tags and create triggers that cause your tag to fire when certain events occur. In your 8th Wall project, fire events (using a single line of JavaScript) at desired places in your code.

Analytics Pre-requisites

You must already have Google Analytics and Google Tag Manager accounts and have a basic understanding of how they work.

For more information, please refer to the following Google documentation:

Add Google Tag Manager to your 8th Wall Project

  1. On the Workspace page of your Tag Manager container, click your container ID (e.g. "GTM-XXXXXX") to open the "Install Google Tag Manager" box. This window contains the code that you’ll later need to add to your 8th Wall project.

GTM1

  2. Open the 8th Wall Cloud Editor and paste the top code block into head.html:

GTM2

  3. Click "+" next to Files, and create a new file called gtm.html, then paste the contents of the bottom code block into this file:

GTM3

  4. Add the following code towards the top of app.js:
import * as googleTagManagerHtml from './gtm.html'
document.body.insertAdjacentHTML('afterbegin', googleTagManagerHtml)

Configure Google Tag Manager

  1. Create a Google Analytics settings variable and add your Google Analytics Tracking ID. See https://support.google.com/tagmanager/answer/9207621 for more information.

Example:

GTM4

Tracking Page Views

At a minimum, create a Tag that will fire upon page load so that you can track information about visitors to your Web AR experience.

Create Tag

  • Tag Type: Google Analytics: Universal Analytics
  • Track Type: Page View
  • Google Analytics Settings: (Select variable created in previous step)
  • Triggering: All Pages

GTM5

Tracking Custom Events

GTM also provides the ability to fire events when custom actions take place inside the WebAR experience. These events will be particular to your WebAR project, but some examples might be:

  • 3D object placed
  • Image Target found
  • Screenshot taken
  • etc…

In this example, we’ll create a Tag (with Trigger) and add it to the "AFrame: Place Ground" sample project that fires each time a 3D model is spawned.

Create Custom Event Trigger

  • Trigger Type: Custom Event
  • Event Name: placeModel
  • This trigger fires on: All Custom Events

GTM6

Create Tag

Next, create a tag that will fire when the "placeModel" trigger is fired in your code.

  • Tag Type: Google Analytics: Universal Analytics
  • Track Type: Event
  • Google Analytics Settings: (Select variable created previously)
  • Triggering: Select "placeModel" trigger created in the previous step.

GTM7

IMPORTANT: Make sure to save all triggers/tags created and then Submit/Publish your settings inside the GTM interface so they are live. See https://support.google.com/tagmanager/answer/6107163

Fire Event Inside 8th Wall Project

In your 8th Wall project, add the following line of JavaScript to fire this trigger at the desired place in your code:

window.dataLayer.push({event: 'placeModel'})

Example - based on https://www.8thwall.com/8thwall/placeground-aframe/master/tap-place.js

export const tapPlaceComponent = {
  init: function() {
    const ground = document.getElementById('ground')
    ground.addEventListener('click', event => {
      // Create new entity for the new object
      const newElement = document.createElement('a-entity')

      // The raycaster gives a location of the touch in the scene
      const touchPoint = event.detail.intersection.point
      newElement.setAttribute('position', touchPoint)

      const randomYRotation = Math.random() * 360
      newElement.setAttribute('rotation', '0 ' + randomYRotation + ' 0')

      newElement.setAttribute('visible', 'false')
      newElement.setAttribute('scale', '0.0001 0.0001 0.0001')
      
      newElement.setAttribute('shadow', {
        receive: false,
      })
      
      newElement.setAttribute('class', 'cantap')
      newElement.setAttribute('hold-drag', '')

      newElement.setAttribute('gltf-model', '#treeModel')
      this.el.sceneEl.appendChild(newElement)

      newElement.addEventListener('model-loaded', () => {
        // Once the model is loaded, we are ready to show it popping in using an animation
        newElement.setAttribute('visible', 'true')
        newElement.setAttribute('animation', {
          property: 'scale',
          to: '7 7 7',
          easing: 'easeOutElastic',
          dur: 800,
        })
        
        // **************************************************
        // Fire Google Tag Manager event once model is loaded
        // **************************************************
        window.dataLayer.push({event: 'placeModel'})
      })
    })
  }
}

Asset Bundles

The Asset bundle feature of 8th Wall's Cloud Editor allows for the use of multi-file assets. These assets typically involve files that reference each other internally using relative paths. ".glTF", ".hcap", ".msdf" and cubemap assets are a few common examples.

In the case of .hcap files, you load the asset via the "main" file, e.g. "my-hologram.hcap". Inside this file are many references to other dependent resources, such as .mp4 and .bin files. These filenames are referenced and loaded by the main file as URLs with paths relative to the .hcap file.

AssetBundleGif

Create Asset Bundle

  1. Prepare your files

Use one of the following methods to prepare your files before upload:

  • Multi-select the individual files from your local filesystem
  • Create a ZIP file.
  • Locate the directory containing all of the files needed by your asset (Note: Directory upload not supported on all browsers!)
  2. Create New Asset Bundle

Option 1:

In the Cloud Editor, click the "+" to the right of ASSETS and select "New asset bundle". Next, select asset type. If you aren't uploading a glTF or HCAP asset, select "Other".

NewAssetBundle

Option 2:

Alternatively, you can drag the assets or ZIP directly into the ASSETS pane at the bottom-right of the Cloud Editor.

NewAssetBundleDrag

  3. Preview Asset Bundle

After the files have been uploaded, you'll be able to preview the assets before adding them to your project. Select individual files in the left pane to preview them on the right.

NewAssetBundlePreview

  4. Select "main" file

If your asset type requires you to reference a file, set this file as your "main file". If your asset type requires you to reference a folder (cubemaps, etc.), set "none" as your "main file".

Note: This step is not required for glTF or HCAP assets. The main file is set automatically for these asset types.

The main file cannot be changed later. If you select the wrong file, you'll have to re-upload the asset bundle.

  5. Set Asset bundle name

Give the asset bundle a name. This is the filename by which you'll access the asset bundle within your project.

  6. Click "Create Bundle"

The upload of your asset bundle will be completed and it will be added to your Cloud Editor project.

Preview Asset Bundle

Assets can be previewed directly within the Cloud Editor. Select an asset on the left to preview on the right. You can preview a specific asset inside the bundle by expanding the "Show contents" menu on the right and selecting an asset inside.

AssetBundlePreview

Rename Asset Bundle

To rename an asset, click the "down arrow" icon to the right of your asset and choose Rename. Edit the name of the asset and hit Enter to save. Important: if you rename an asset, you'll need to go through your project and make sure all references point to the updated asset name.

Delete Asset Bundle

To delete an asset, click the "down arrow" icon to the right of your asset and choose Delete.

Referencing Asset Bundle

To reference the asset bundle from an html file in your project (e.g. body.html), simply provide the appropriate path to the src= or gltf-model= parameter.

To reference the asset bundle from JavaScript, use require().

Example - html

<!-- Example 1 -->
<a-assets>
  <a-asset-item id="myModel" src="assets/sand-castle.gltf"></a-asset-item>
</a-assets>
<a-entity 
  id="model"
  gltf-model="#myModel"
  class="cantap"
  scale="3 3 3"
  shadow="receive: false">
</a-entity>


<!-- Example 2 -->
<holo-cap 
  id="holo" 
  src="./assets/my-hologram.hcap"
  holo-scale="6"
  holo-touch-target="1.65 0.35"
  xrextras-hold-drag
  xrextras-two-finger-rotate 
  xrextras-pinch-scale="scale: 6">
</holo-cap>

Example - javascript

const modelFile = require('./assets/my-model.gltf')

Working with iframes

Starting with iOS 9.2, Safari blocked deviceorientation and devicemotion event access from cross-origin iframes.

This prevents 8th Wall Web (if running inside the iframe) from receiving the deviceorientation and devicemotion data required for proper tracking when SLAM is enabled. (See Web Browser Requirements.) The result is that the orientation of your digital content will appear to be wrong, and the content will "jump" all over the place when you move the phone.

If you have access to the parent window, it's possible to add a script on the parent page that will send custom messages containing deviceorientation and devicemotion data to 8th Wall's AR Engine inside the iframe via JavaScript's postMessage() method. The postMessage() method safely enables cross-origin communication between Window objects; e.g., between a page and an iframe embedded within it. (See https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage)

For maximum compatibility with iOS devices, we have created two scripts:

For the OUTER website

iframe.js must be included in the HEAD of the OUTER page via this script tag:

<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>

When starting AR, register the XRIFrame by iframe ID:

window.XRIFrame.registerXRIFrame(IFRAME_ID)

When stopping AR, deregister the XRIFrame:

window.XRIFrame.deregisterXRIFrame()

For the INNER website

iframe-inner.js must be included in the HEAD of your INNER AR website with this script tag:

<script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>

By allowing the inner and outer windows to communicate, deviceorientation/devicemotion data can be shared.

See sample project at https://www.8thwall.com/8thwall/inline-ar

Examples

Outer Page
<!-- Send deviceorientation/devicemotion to the INNER iframe -->
<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>

...
const IFRAME_ID = 'my-iframe' // Iframe containing AR content.
const onLoad = () => {
  window.XRIFrame.registerXRIFrame(IFRAME_ID)
}
// Add event listeners and callbacks for the body DOM.
window.addEventListener('load', onLoad, false)

...

<body>
  <iframe
    id="my-iframe"
    style="border: 0; width: 100%; height: 100%"
    allow="camera;microphone;gyroscope;accelerometer;"
    src="https://www.other-domain.com/my-web-ar/">
  </iframe>
</body>
Inner Page: AFrame projects
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>

...

<body>
  <!-- For A-FRAME -->
  <!-- NOTE: The iframe-inner script must load after A-FRAME, and iframe-inner component must appear before xrweb. -->
  <a-scene iframe-inner xrweb>
    ...
  </a-scene>
Inner Page: Non-AFrame projects
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>

...

<!-- For non-AFrame projects, add iframeInnerPipelineModule to the custom pipeline module section,
typically located in "onxrloaded" like so: -->
XR8.addCameraPipelineModules([
  // Custom pipeline modules
  iframeInnerPipelineModule,
])

Progressive Web Apps

Progressive Web Apps (PWAs) use modern web capabilities to offer users an experience that's similar to a native application. The 8th Wall Cloud Editor allows you to create a PWA version of your project so that users can add it to their home screen. Users must be connected to the internet in order to access it.

NOTE: Progressive Web Apps are only available to accounts on Agency and Business plans.

To enable PWA support for your WebAR project:

  1. Visit your project settings page, and expand the “Progressive Web App” pane. (Only visible to Agency/Business users)
  2. Toggle the slider to Enable PWA support.
  3. Customize your PWA name, icon, and colors.
  4. Click "Save"

project-settings-pwa

Note: For Cloud Editor projects, you may be prompted to build & re-publish your project if it was previously published. If you decide not to republish, PWA support will be included the next time your project is built.

PWA API Reference

8th Wall's XRExtras library provides an API to automatically display an install prompt in your web app.

Please refer to the PwaInstaller API reference at https://github.com/8thwall/web/tree/master/xrextras/src/pwainstallermodule

PWA Icon Requirements

  • File Types: .png
  • Aspect Ratio: 1:1
  • Dimensions:

    • Minimum: 512 x 512 pixels

      • Note: If you upload an image larger than 512x512, it will be cropped to a 1:1 aspect ratio and resized down to 512x512.

PWA Install Prompt Customization

The PwaInstaller module from XRExtras displays an install prompt asking your user to add your web app to their home screen.

To customize the look of your install prompt, you can provide custom string values through the XRExtras.PwaInstaller.configure() API.

For a completely custom install prompt, configure the installer with displayInstallPrompt and hideInstallPrompt methods.
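A minimal sketch of such a custom prompt, assuming displayInstallPrompt and hideInstallPrompt are callbacks invoked when the prompt should appear or disappear (check the PwaInstaller API reference for the exact signatures); the "myInstallBanner" element is hypothetical:

```javascript
// Returns the CSS display value for a given prompt visibility state.
const bannerDisplay = (visible) => (visible ? 'block' : 'none')

// Hypothetical wiring: "myInstallBanner" is an element you supply, and the
// displayInstallPrompt/hideInstallPrompt keys are assumptions per the text above.
// XRExtras.PwaInstaller.configure({
//   displayInstallPrompt: () => {
//     document.getElementById('myInstallBanner').style.display = bannerDisplay(true)
//   },
//   hideInstallPrompt: () => {
//     document.getElementById('myInstallBanner').style.display = bannerDisplay(false)
//   },
// })
```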

Self-Hosted PWA Usage

For Self-Hosted apps, we aren’t able to automatically inject details of the PWA into the HTML, so you’ll need to supply the name and icon you’d like to appear in the install prompt yourself.

Add the following <meta> tags to the <head> of your html:

<meta name="8thwall:pwa_name" content="My PWA Name">

<meta name="8thwall:pwa_icon" content="//cdn.mydomain.com/my_icon.png">

PWA Code Examples

Basic Example (AFrame)

<a-scene
  xrextras-almost-there
  xrextras-loading
  xrextras-runtime-error
  xrextras-pwa-installer
  xrweb>

Basic Example (Non-AFrame)

XR8.addCameraPipelineModules([
  XR8.GlTextureRenderer.pipelineModule(),
  XR8.Threejs.pipelineModule(),
  XR8.XrController.pipelineModule(),
  XRExtras.AlmostThere.pipelineModule(),
  XRExtras.FullWindowCanvas.pipelineModule(),
  XRExtras.Loading.pipelineModule(),
  XRExtras.RuntimeError.pipelineModule(),

  XRExtras.PwaInstaller.pipelineModule(), // Added here

  // Custom pipeline modules.
  myCustomPipelineModule(),
])

Customized Look Example (AFrame)

<a-scene
  xrextras-gesture-detector
  xrextras-almost-there
  xrextras-loading
  xrextras-runtime-error
  xrextras-pwa-installer="name: My Cool PWA;
    iconSrc: '//cdn.8thwall.com/my_custom_icon';
    installTitle: 'My CustomTitle';
    installSubtitle: 'My Custom Subtitle';
    installButtonText: 'Custom Install';
    iosInstallText: 'Custom iOS Install'"
  xrweb>

Customized Look Example (Non-AFrame)

XRExtras.PwaInstaller.configure({
  displayConfig: {
    name: 'My Custom PWA Name',
    iconSrc: '//cdn.8thwall.com/my_custom_icon',
    installTitle: ' My Custom Title',
    installSubtitle: 'My Custom Subtitle',
    installButtonText: 'Custom Install',
    iosInstallText: 'Custom iOS Install',
  }
})

Customized Display Time Example (AFrame)

<a-scene
  xrweb="disableWorldTracking: true"
  xrextras-gesture-detector
  xrextras-almost-there
  xrextras-loading
  xrextras-runtime-error
  xrextras-pwa-installer="minNumVisits: 5;
    displayAfterDismissalMillis: 86400000;"
>

Customized Display Time Example (Non-AFrame)

XRExtras.PwaInstaller.configure({
  promptConfig: {
    minNumVisits: 5, // Users must visit web app 5 times before prompt
    displayAfterDismissalMillis: 86400000 // One day
  }
})
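Since both prompt values are plain milliseconds, a small helper can keep the configuration readable. The helper names below are illustrative, not part of the XRExtras API:

```javascript
// Illustrative helpers for computing promptConfig values in milliseconds.
// These are NOT part of the XRExtras API; they just make the numbers readable.
const hoursToMillis = (hours) => hours * 60 * 60 * 1000
const daysToMillis = (days) => days * hoursToMillis(24)

const promptConfig = {
  minNumVisits: 5,                               // users must visit web app 5 times first
  displayAfterDismissalMillis: daysToMillis(1),  // 86400000 (one day)
}
```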

Converting Models to GLB format

If you are using 8th Wall Web with A-Frame, three.js or Babylon.js, we recommend using 3D models in GLB (glTF 2.0 binary) format in your Web AR experiences. We believe GLB is currently the best format for Web AR with its small file size, great performance and versatile feature support (PBR, animations, etc).

For more information about 3D model best practices and links to a number of GLB converters, please visit:

https://www.8thwall.com/glb

Device Authorization

If you are on an Agency or Business plan, you gain the ability to self-host WebAR experiences. If you are self-hosting on a webserver that hasn't been whitelisted (see the Connected Domains section of the documentation), you will need to authorize your device in order to view your experiences.

Authorizing a device installs a Developer Token (cookie) into its web browser, allowing it to view any app key within the current workspace.

There is no limit to the number of devices that can be authorized, but each device needs to be authorized individually. Views of your web application from an authorized device count toward your monthly usage total.

IMPORTANT: If you have followed the steps below on an iOS device, and are still having issues, please see the Troubleshooting section for steps to fix. Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.

How to authorize a device:

  1. Log in to 8thwall.com and select a Project.

  2. Click Device Authorization to expand the device authorization pane.

  3. Select the 8th Wall Engine version to use during development. To use the latest stable version of 8th Wall, select release. To test against a pre-release version, select beta.

ConsoleDeveloperModeChannel

  4. Authorize your device:

From Desktop: If you are logged into the console on your laptop/desktop, scan the QR code from the device you wish to authorize. This installs an authorization cookie on the device.

Note: A QR code can only be scanned once. After scanning, you will receive confirmation that your device has been authorized. The console will then generate a new QR code that can be scanned to authorize another device.

Before:

ConsoleDevTokenQR

After:

Confirmation (Console) Confirmation (On Device)
ConsoleQRConfirmation MobileQRConfirmation

From Mobile: If you are logged into 8thwall.com directly on the mobile device you wish to authorize, simply click Authorize browser. Doing so installs an authorization cookie into your mobile browser, authorizing it to view any project within the current workspace.

Before:

DeveloperModeMobile

After:

DeveloperModeMobileAuthorized

Local Hosting

If you are on a paid Agency or Business plan, you gain the ability to host WebAR projects on your own web servers.

Serving a web app locally from your computer can be tricky, as browsers require HTTPS to grant camera access on your phone. As a convenience, 8th Wall has created a public GitHub repo (https://github.com/8thwall/web) where you can find a "serve" script that will run a local https webserver on your development computer. You can also download sample 8th Wall Web projects to help you get started with self-hosted configurations.

Locally From Mac

  1. Install Node.js and npm

If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm

  2. Open a terminal window (Terminal.app, iTerm2, etc):
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>

Example:

./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777

ServeLocally

IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.

NOTE: If the serve script states it's listening on 127.0.0.1:<port> (which is the loopback device aka "localhost") your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.

Example - specify network interface:

./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0

If you have issues connecting to the local webserver running on your computer, please refer to the Troubleshooting section.

Locally From Windows

As on Mac, serving a web app locally can be tricky because browsers require HTTPS to grant camera access on your phone. The "serve" script in 8th Wall's public GitHub repo (https://github.com/8thwall/web) runs a local https webserver on your development computer, and sample 8th Wall Web projects are available there to help you get started.

  1. Install Node.js and npm

If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm

  2. Open a Command Prompt (cmd.exe)

Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.

# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# serve\bin\serve.bat -d <sample_project_location>

Example:

serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777

ServeLocallyWindows

IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.

NOTE: If the serve script states it's listening on 127.0.0.1:<port> (which is the loopback device aka "localhost") your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.

Example - specify network interface:

serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi

If you have issues connecting to the local webserver running on your computer, please refer to the Troubleshooting section.

View Project on iOS Safari

  1. The “serve” command run in the previous step will display the IP and Port to connect to
  2. Open Safari on iOS 11+, and connect to the “Listening” URL. Note: Safari will complain about the SSL certificates, but you can safely proceed.

IMPORTANT: Make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.

Example: https://192.168.1.50:8080

  3. Click "visit this website": iOSConnect1
  4. Click "Show Details": iOSConnect2
  5. Click "Visit Website": iOSConnect3
  6. Finally, click "Allow" to grant camera permissions and start viewing the sample AR experience: iOSConnect4

View Project on Android

  1. The “serve” command run in the previous step will display the IP and Port to connect to
  2. Open Chrome, a Chrome-variant (e.g. Samsung browser) or Firefox

IMPORTANT: Make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.

Example: https://192.168.1.50:8080

  3. Chrome Example: The browser will complain that the cert is invalid; simply click "Advanced" to proceed: AndroidConnect1
  4. Click "PROCEED TO ... (UNSAFE)": AndroidConnect2

Changelog

Release 15: (2020-October-09, v15.0.9.487 / 2020-September-22, v15.0.8.487)

  • New Features:

    • 8th Wall Curved Image Targets:

      • Added support for cylindrical image targets such as those wrapped around bottles, cans and more.
      • Added support for conical image targets such as those wrapped around coffee cups, party hats, lampshades and more.
  • Fixes and Enhancements:

    • Improved tracking quality for SLAM and Image Targets.
    • Fixed an issue with MRCS Holograms and device routing on iOS 14.
    • Fixed an issue with Face Effects and Image Targets where updates to mirroredDisplay were not reflected during runtime.
    • Improved experience for some Android devices with multiple cameras. (v15.0.9.487)
    • Fixed a raycasting issue with AFrame 1.0.x (v15.0.9.487)
  • XRExtras Enhancements:

    • New AFrame components for easy Curved Image Target development:

      • 3D container prefab component that forms a portal-like container that 3D content can be placed inside.
      • Video playback prefab component for easily enabling video on curved image targets.
    • Improved detection of Web Share API Level 2 support.

Release 14.2: (2020-July-30, v14.2.4.949)

  • New Features:

    • Updated MediaRecorder.configure() to provide more control over audio output and mixing:

      • Pass in your own audioContext.
      • Request mic permissions during setup or runtime.
      • Optionally disable microphone recording.
      • Add your own audio nodes to the audio graph.
      • Incorporate scene audio into recording playback.
  • Fixes and Enhancements:

    • Fixed an issue where clip planes were not set from PlayCanvas in some cases.
    • Added support for switching between world tracking, image target tracking, and face effects at runtime.
    • Fixed an issue where vertex buffers could be rebound after vertex arrays were deleted.
    • Improved experience for some Android devices with multiple cameras.

Release 14.1: (2020-July-06, v14.1.4.949)

  • New Features:

    • Introducing 8th Wall Video Recording:

      • Add in-browser video recording to any 8th Wall project with the new XR8.MediaRecorder API.
      • Add dynamic overlays and end cards with custom images and call to action.
      • Configure maximum video duration and resolution.
    • Added microphone as a configurable module permission.
  • Fixes and Enhancements:

    • Enhanced CanvasScreenshot functionality with improved overlay support.
    • Fixed an issue with Face Effects that could cause visual glitches on device orientation change.
    • Improved Face Effects right-handed coordinate compatibility with Bablyon.js.
    • Improved graphics pipeline compatibility with Babylon.js.
  • XRExtras Enhancements:

    • Record button prefab component for capturing video and photos:

      • Select between standard, fixed, and photo capture modes.
    • Preview prefab component for easily previewing, downloading, and sharing captures
    • Use XRExtras to easily customize the Video Recording user experience in your project:

      • Configure maximum video length and resolution.
      • Add optional watermark to each frame of the video.
      • Add optional end card to add branding and a call to action at the end of the video.

Release 14: (2020-May-26)

  • New Features:

    • Introducing 8th Wall Face Effects: Attach 3D objects to face attachment points and paint your face with custom materials, shaders or videos.
    • Selfie Mode: Use the front camera with a mirrored display to get the perfect selfie shot.
    • Desktop Browsers: Enable your image target and face effect experiences to run in desktop browsers utilizing the webcam.
  • Fixes and Enhancements:

    • Enhanced capture field of view on Pixel 4/4XL phones.
    • Enhanced camera profiles for certain phone models.
    • Fixed an issue with changing device orientation during startup.
    • Fixed an issue with swapping the camera direction on the same a-scene.
    • Fixed an issue with AFrame look-controls not being removed on scene restart.
    • Improved experience for some Android phones with multiple cameras.
    • Other fixes and enhancements.
  • XRExtras Enhancements:

    • Enhanced almost there flows for experiences that can be viewed on desktop.
    • PauseOnBlur module stops the camera when your tab is not active.
    • New AFrame components for easy face effects and desktop experience development.
    • New ThreeExtras for rendering PBR materials, basic materials, and videos to faces.

Release 13.2: (2020-Feb-13)

  • New Features:

    • Release camera stream on XR8.pause() and reopen on XR8.resume().
    • Added API to access shader program and modify uniforms used by GlTextureRenderer.
    • Added API to configure WebGL context on run.
  • Fixes and Enhancements:

    • Fix black video issue on iOS when a user long-presses on an image.
    • Improved iOS screenshot capture speed and reliability.
    • Fixed alpha channel rendering when taking a screenshot.
    • Improved experience for some Android phones with multiple cameras.
    • Improved detection of social network web views.
  • XRExtras Enhancements:

    • Improved QR codes with better compatibility with native cameras.
    • Improved link out flows for social networks.
    • Improved CSS customization.

Release 13.1:

  • New Features:

    • Improved framerate on high resolution Android phones.
    • Camera pipeline can be stopped and restarted.
    • Camera pipeline modules can be removed at runtime or when stopped.
    • New lifecycle callbacks for attaching and detaching.
  • Fixes and Enhancements:

    • Improved experience for some Android phones with multiple cameras.
    • Fixed iOS phone calibration on iOS 12.2 and above.

Release 13:

  • New Features:

    • Adds support for cloud-based creation, collaboration, publishing, and hosting of WebAR content.

Release 12.1:

  • Fixes and Enhancements:

    • Increased camera resolution on newer iOS devices.
    • Increased AFrame fps on high-res Android devices.
    • Fixed three.js r103+ raycasting issues.
    • Added support for iPadOS.
    • Fixed memory issue when loading many image targets repeatedly.
    • Minor performance enhances and bug fixes.

Release 12:

  • New Features:

    • Increased image target upload limit to 1000 image targets per app.
    • New API for selecting active image targets at runtime.
    • Apps can now scan for up to 10 image targets simultaneously.
    • Front facing camera is supported in camera framework and image targets.
    • Engine support for PlayCanvas.
  • Fixes:

    • Improved experience for some Android phones with multiple cameras.
  • XRExtras:

    • Improved visual quality on Android Phones.
    • Support for iOS 13 device orientation permissions.
    • Better error handling for missing web assembly on some older versions of iOS.
    • Support for PlayCanvas.

Release 11.2:

  • New Features:

    • iOS 13 motion support.

Release 11.1:

  • Fixes and Enhancements:

    • Reduced memory usage.
    • Improved tracking performance.
    • Enhanced detection of browser capabilities.

Release 11:

  • New Features:

    • Added support for Image Targets.
    • Added support for BabylonJS.
    • Reduced JS binary size to 1MB.
    • Added support running 8th Wall Web inside a cross-origin iframe.
    • Minor API additions.

Release 10.1:

  • New Features:

    • Added support for A-Frame 0.9.0.
  • Fixes:

    • Fixed error when providing includedTypes to XrController.hitTest().
    • Reduced memory usage when tracking over extended distances.

Release 10:

Release 10 adds a revamped web developer console with streamlined developer-mode, access to allowed origins and QR codes. It adds 8th Wall Web support for XRExtras, an open-source package for error handling, loading visualizations, "almost there" flows, and more.

  • New Features:

    • Revamped web developer console.
    • XR Extras provides a convenient solution for:

      • Load screens and requesting camera permissions.
      • Redirecting users from unsupported devices or browsers ("almost there").
      • Runtime error handling.
      • Drawing a full screen camera feed in low-level frameworks like threejs.
    • Added public lighting, hit test interfaces to XrController.
    • Other minor API additions.
  • Fixes:

    • Improved app startup speed.
    • Fixed a framework issue where errors were not propagated on startup.
    • Fixed an issue that could occur with WebGL during initialization.
    • Use window.screen interface for device orientation if available.
    • Fixed a threejs issue that could occur when the canvas is resized.

Release 9.3:

  • New Features:

    • Minor API additions: XR.addCameraPipelineModules() and XR.FullWindowCanvas.pipelineModule()

Release 9.2:

Release 9.1:

  • New Features:

    • Added support for Amazon Sumerian in 8th Wall Web
    • Improved tracking stability and eliminated jitter

Release 9:

  • Initial release of 8th Wall Web!

Device Not Authorized

Issue: When trying to view my Web App, I receive a "Device Not Authorized" error message.

Safari specific:

The situation:

  • While viewing your project, you see 'Device not Authorized' alerts, but
  • apps.8thwall.com/token shows the correct authorization.

Why does this happen?

Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.

Steps to fix:

  1. Close Safari
  2. Turn off Intelligent Tracking Prevention at Settings>Safari>Prevent Cross-Site Tracking
  3. Clear 8th Wall cookies at Settings>Safari>Advanced>Website Data>8thwall.com
  4. Reauthorize from console
  5. Check your project
  6. If not fixed: Clear all cookies at Settings>Safari>Clear History and Website Data
  7. Reauthorize from console

Otherwise

See Invalid App Key steps from #5 onwards for more troubleshooting.

Invalid App Key

Issue: When trying to view my Web App, I receive an "Invalid App Key" or "Domain Not Authorized" error message.

Troubleshooting steps:

  1. Verify your app key was pasted properly into source code.
  2. Verify you are connecting to your web app via https. This is required by mobile browsers for camera access.
  3. Verify you are using a supported browser, see Web Browser Requirements
  4. Verify your device has been properly authorized. On your phone, visit https://apps.8thwall.com/token to view device authorization status.
  5. If you are a member of multiple Web Developer workspaces, make sure that the App Key and Dev Token are from the same workspace.
  6. If your web browser is in Private Browsing or Incognito mode, please exit Private/Incognito mode, re-authorize your device, and try again.
  7. Clear website data & cookies from your web browser, re-authorize your device, and try again.
  8. If you are on a paid plan and are trying to access your WebAR experience publicly, make sure that Connected Domains are configured properly.

6DoF Camera Motion Not Working

Issue: As I move my phone, the camera position does not update.

Resolution: Check the position of the camera in your scene. The camera should NOT be at a height (Y) of zero; set it to a non-zero value. The camera's Y position at start effectively determines the scale of virtual content on a surface (e.g. a smaller Y makes content appear larger).
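For example, in A-Frame you might start the camera at an approximate eye height. The value 1.6 here is just a common choice, not a requirement:

```html
<!-- Camera starts at a non-zero height so 6DoF motion and content scale behave as expected. -->
<a-camera position="0 1.6 0"></a-camera>
```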

Object Not Tracking Surface Properly

Issue: Content in my scene doesn't appear to be "sticking" to a surface properly

Resolution:

To place an object on a surface, the base of the object needs to be at a height of Y=0.

Note: Setting the position at a height of Y=0 isn't necessarily sufficient.

For example, if your model's transform origin is at the center of the object, placing it at Y=0 will leave the lower half of the object below the surface. In this case you'll need to adjust the vertical position of the object so that its bottom sits at Y=0.
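As an illustration (the helper name here is hypothetical, not part of any API), the Y position that rests a model's base on the surface can be computed from its height and where its origin sits:

```javascript
// Hypothetical helper: originFraction is how far up the model its transform
// origin sits (0 = at the base, 0.5 = at the center, 1 = at the top).
// Returns the Y position that places the model's base at Y=0.
const groundedY = (height, originFraction) => height * originFraction

groundedY(2.0, 0.5)  // a 2m model with a centered origin should be placed at y=1
groundedY(2.0, 0)    // a base-origin model at y=0 already rests on the surface
```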

It's often helpful to visualize object positioning relative to the surface by placing a semi-transparent plane at Y=0.

A-Frame example:

<a-plane position="0 0 0" rotation="-90 0 0" width="4" height="4" material="side: double; color: #FFFF00; transparent: true; opacity: 0.5" shadow></a-plane>

Three.js example:

  // Create a 1x1 plane with a transparent yellow material.
  // THREE.PlaneGeometry(width, height, widthSegments, heightSegments)
  const geometry = new THREE.PlaneGeometry(1, 1, 1, 1)
  const material = new THREE.MeshBasicMaterial({color: 0xffff00, transparent: true, opacity: 0.5, side: THREE.DoubleSide})
  const plane = new THREE.Mesh(geometry, material)
  // Rotate 90 degrees (Math.PI / 2 radians) along X so the plane is parallel to the ground.
  plane.rotateX(Math.PI / 2)
  plane.position.set(0, 0, 0)
  scene.add(plane)

Can't connect to "serve" script

Issue:

I'm using the "serve" script (from 8th Wall Web's public GitHub repo: https://github.com/8thwall/web) to run a local webserver on my laptop and it says it's listening on 127.0.0.1. My phone is unable to connect to the laptop using that IP address.

ServeLocalhost

"127.0.0.1" is the loopback address of your laptop (aka "localhost"), so other devices such as your phone won't be able to connect directly to that IP address. In this case, the serve script has defaulted to listening on the loopback interface, making it reachable only from the laptop itself.

Resolution:

Please re-run the serve script with the -i flag and specify the network interface you wish to use.

Example (Mac):

./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0

Example (Windows):

Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.

serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi

If you are still unable to connect, please check the following:

  • Make sure that your computer and mobile device are both connected to the same WiFi network.
  • Disable the local firewall running on your computer.
  • To connect, either scan the QR code or make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.

Help & Support

Need some help? 8th Wall is here to help you succeed. Contact us directly, or reach out to the community to get answers.

Ways to get help:

  • Slack: Join our public Slack channel to discuss and ask questions with members of the 8th Wall community.
  • Email Support: Email support@8thwall.com to get help directly from our support team.
  • Stack Overflow: Get help and discuss solutions with other 8th Wall Web users by using the 8thwall-web tag.
  • GitHub: Download sample code and view step-by-step setup guides on our GitHub repo.


API Overview

This section of the documentation contains details of 8th Wall Web's JavaScript API.

XR8

Description

Entry point for 8th Wall's JavaScript API

Functions

Function Description
addCameraPipelineModule Adds a module to the camera pipeline that will receive event callbacks for each stage in the camera pipeline.
addCameraPipelineModules Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array.
clearCameraPipelineModules Remove all camera pipeline modules from the camera loop.
isPaused Indicates whether or not the XR session is paused.
pause Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked.
resume Resume the current XR session.
removeCameraPipelineModule Removes a module from the camera pipeline.
removeCameraPipelineModules Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array.
requiredPermissions Return a list of permissions required by the application.
run Open the camera and start running the camera run loop.
runPreRender Executes all lifecycle updates that should happen before rendering.
runPostRender Executes all lifecycle updates that should happen after rendering.
stop Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked.
version Get the 8th Wall Web engine version.

Events

Event Emitted Description
xrloaded This event is emitted once XR8 has loaded.

Modules

Module Description
AFrame Entry point for A-Frame integration with 8th Wall Web.
Babylonjs Entry point for Babylon.js integration with 8th Wall Web.
CameraPixelArray Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.
CanvasScreenshot Provides a camera pipeline module that can generate screenshots of the current scene.
FaceController Provides face detection and meshing, and interfaces for configuring tracking.
GlTextureRenderer Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations.
MediaRecorder Provides a camera pipeline module that allows you to record a video in MP4 format.
PlayCanvas Entry point for PlayCanvas integration with 8th Wall Web.
Sumerian Entry point for Sumerian integration with 8th Wall Web.
Threejs Provides a camera pipeline module that drives a three.js camera to do virtual overlays.
XrConfig Specifies the class of devices and cameras that pipeline modules should run on.
XrController XrController provides 6DoF camera tracking and interfaces for configuring tracking.
XrDevice Provides information about device compatibility and characteristics.
XrPermissions Utilities for specifying permissions required by a pipeline module.

XR8.addCameraPipelineModule()

XR8.addCameraPipelineModule(module)

Description

8th Wall camera applications are built using a camera pipeline module framework. For a full description of camera pipeline modules, see CameraPipelineModule.

Applications install modules which then control the behavior of the application at runtime. A module object must have a .name string which is unique within the application, and then should provide one or more of the camera lifecycle methods which will be executed at the appropriate point in the run loop.

During the main runtime of an application, each camera frame goes through the following cycle:

onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender

Camera modules should implement one or more of the following camera lifecycle methods:

Function Description
onAppResourcesLoaded Called when we have received the resources attached to an app from the server.
onAttach Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running.
onBeforeRun Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing.
onCameraStatusChange Called when a change occurs during the camera permissions request.
onCanvasSizeChange Called when the canvas changes size.
onDetach Called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline.
onDeviceOrientationChange Called when the device changes landscape/portrait orientation.
onException Called when an error occurs in XR. Called with the error object.
onPaused Called when XR8.pause() is called.
onProcessCpu Called to read results of GPU processing and return usable data.
onProcessGpu Called to start GPU processing.
onRender Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
onResume Called when XR8.resume() is called.
onStart Called when XR starts. First callback after XR8.run() is called.
onUpdate Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpu.modulename and processCpu.modulename where the name is given by module.name = "modulename".
onVideoSizeChange Called when the camera video changes size.
requiredPermissions Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR.

Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline; the data is keyed by the module's name.
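This keying behavior can be sketched in plain JavaScript. The sketch below is a simplified illustration of the run loop, not the actual XR8 engine code:

```javascript
// Simplified illustration of how the pipeline keys each stage's results by
// module name before passing them to later stages. NOT the actual XR8 engine.
const runFrameStages = (modules) => {
  const onProcessGpuResult = {}
  const onProcessCpuResult = {}
  modules.forEach((m) => {
    if (m.onProcessGpu) { onProcessGpuResult[m.name] = m.onProcessGpu() }
  })
  modules.forEach((m) => {
    if (m.onProcessCpu) { onProcessCpuResult[m.name] = m.onProcessCpu({onProcessGpuResult}) }
  })
  modules.forEach((m) => {
    if (m.onUpdate) { m.onUpdate({onProcessCpuResult}) }
  })
}

// A later module reads an earlier module's output under that module's name.
const updates = []
runFrameStages([
  {name: 'camerapixelarray', onProcessGpu: () => ({pixels: [0, 255]})},
  {
    name: 'qrscan',
    onProcessCpu: ({onProcessGpuResult}) =>
      ({count: onProcessGpuResult.camerapixelarray.pixels.length}),
    onUpdate: ({onProcessCpuResult}) => updates.push(onProcessCpuResult.qrscan.count),
  },
])
```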

Example 1 - A camera pipeline module for managing camera permissions:

XR8.addCameraPipelineModule({
  name: 'camerastartupmodule',
  onCameraStatusChange: ({status}) => {
    if (status == 'requesting') {
      myApplication.showCameraPermissionsPrompt()
    } else if (status == 'hasStream') {
      myApplication.dismissCameraPermissionsPrompt()
    } else if (status == 'hasVideo') {
      myApplication.startMainApplication()
    } else if (status == 'failed') {
      myApplication.promptUserToChangeBrowserSettings()
    }
  },
})

Example 2 - A QR code scanning application could be built like this:

// Install a module which gets the camera feed as a UInt8Array.
XR8.addCameraPipelineModule(
  XR8.CameraPixelArray.pipelineModule({luminance: true, width: 240, height: 320}))

// Install a module that draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())

// Create our custom application logic for scanning and displaying QR codes.
XR8.addCameraPipelineModule({
  name: 'qrscan',
  onProcessCpu: ({onProcessGpuResult}) => {
    // CameraPixelArray.pipelineModule() returned these in onProcessGpu.
    const {pixels, rows, cols, rowBytes} = onProcessGpuResult.camerapixelarray
    const {wasFound, url, corners} = findQrCode(pixels, rows, cols, rowBytes)
    return {wasFound, url, corners}
  },
  onUpdate: ({onProcessCpuResult}) => {
    // These were returned by this module ('qrscan') in onProcessCpu.
    const {wasFound, url, corners} = onProcessCpuResult.qrscan
    if (wasFound) {
      showUrlAndCorners(url, corners)
    }
  },
})

XR8.addCameraPipelineModules()

XR8.addCameraPipelineModules([ modules ])

Description

Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array.

Parameters

Parameter Description
modules An array of camera pipeline modules.

Example

const onxrloaded = () => {
  XR8.addCameraPipelineModules([  // Add camera pipeline modules.
    // Existing pipeline modules.
    XR8.GlTextureRenderer.pipelineModule(),  // Draws the camera feed.
  ])

  // Request camera permissions and run the camera.
  XR8.run({canvas: document.getElementById('camerafeed')})
}

// Wait until the XR javascript has loaded before making XR calls.
window.onload = () => {window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)}

XR8.clearCameraPipelineModules()

XR8.clearCameraPipelineModules()

Description

Remove all camera pipeline modules from the camera loop.

Parameters

None

Example

XR8.clearCameraPipelineModules()

XR8.isPaused()

bool XR8.isPaused()

Parameters

None

Description

Indicates whether or not the XR session is paused.

Example

// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR8.isPaused()) {
      XR8.pause()
    } else {
      XR8.resume()
    }
  },
  true)

XR8.pause()

XR8.pause()

Parameters

None

Description

Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked.

Example

// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR8.isPaused()) {
      XR8.pause()
    } else {
      XR8.resume()
    }
  },
  true)

XR8.removeCameraPipelineModule()

XR8.removeCameraPipelineModule(moduleName)

Description

Removes a module from the camera pipeline.

Parameters

Parameter Description
moduleName The name of a module, as a string.

Example

XR8.removeCameraPipelineModule('reality')

XR8.removeCameraPipelineModules()

XR8.removeCameraPipelineModules([ moduleNames ])

Description

Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array.

Parameters

Parameter Description
moduleNames An array of module name strings, or of objects with a name property.

Example

XR8.removeCameraPipelineModules(['threejsrenderer', 'reality'])

XR8.requiredPermissions()

XR8.requiredPermissions()

Parameters

None

Description

Return the set of permissions required by the application.

Example

if (XR8.XrPermissions) {
  const permissions = XR8.XrPermissions.permissions()
  const requiredPermissions = XR8.requiredPermissions()
  if (!requiredPermissions.has(permissions.DEVICE_ORIENTATION)) {
    return
  }
}

XR8.resume()

XR8.resume()

Parameters

None

Description

Resume the current XR session after it has been paused.

Example

// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
  'click',
  () => {
    if (!XR8.isPaused()) {
      XR8.pause()
    } else {
      XR8.resume()
    }
  },
  true)

XR8.run()

XR8.run({canvas, webgl2: true, ownRunLoop: true})

Parameters

Property Type Default Description
canvas HTMLCanvasElement The HTML Canvas that the camera feed will be drawn to.
webgl2 [Optional] bool true If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool true If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

Notes:

  • cameraConfig: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must disable world tracking by calling XR8.XrController.configure({disableWorldTracking: true}) first.

Description

Open the camera and start running the camera run loop.

Example

// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed')})

Example - Using Front camera (image tracking only)

// Disable world tracking (SLAM). This is required to use the front camera.
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), cameraConfig: {direction: XR8.XrConfig.camera().FRONT}})

Example - Set glContextConfig

// Open the camera and start running the camera run loop with an opaque canvas.
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), glContextConfig: {alpha: false, preserveDrawingBuffer: false}})
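Given the allowedDevices parameter above, here is a sketch of enabling desktop and laptop webcams. The XR8 global is stubbed so the snippet is self-contained outside the browser; in a real page, XR8 is provided by the 8th Wall script tag and XR8.run opens the camera:

```javascript
// Stand-in for the XR8 global so this sketch runs outside a browser.
// In a real page, XR8 comes from the 8th Wall script include.
const XR8 = {
  XrConfig: {device: () => ({MOBILE: 'mobile', ANY: 'any'})},
  run: (config) => config,  // the real XR8.run opens the camera and starts the run loop
}

// Allow laptop/desktop webcams in addition to mobile. Note that world
// tracking (SLAM) still only runs on mobile devices.
const config = XR8.run({
  canvas: 'camerafeed',  // normally document.getElementById('camerafeed')
  allowedDevices: XR8.XrConfig.device().ANY,
})
```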

XR8.runPreRender()

XR8.runPreRender( timestamp )

Description

Executes all lifecycle updates that should happen before rendering.

IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().

Parameters

Parameter Description
timestamp The current time, in milliseconds.

Example

// Implement A-Frame components tick() method
function tick() {
  // Check device compatibility, run any necessary view geometry updates, and draw the camera feed.
  ...
  // Run XR lifecycle methods
  XR8.runPreRender(Date.now())
}

XR8.runPostRender()

XR8.runPostRender()

Description

Executes all lifecycle updates that should happen after rendering.

IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().

Parameters

None

Example

// Implement A-Frame components tock() method
function tock() {
  // Check whether XR is initialized
  ...
  // Run XR lifecycle methods
  XR8.runPostRender()
}

XR8.stop()

XR8.stop()

Parameters

None

Description

Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. Call XR8.run() to restart the engine after it has been stopped.

Example

XR8.stop()

XR8.version()

string XR8.version()

Parameters

None

Description

Get the 8th Wall Web engine version.

Example

console.log(XR8.version())

XR8.AFrame

A-Frame (https://aframe.io) is a web framework designed for building virtual reality experiences. By adding 8th Wall Web to your A-Frame project, you can now easily build augmented reality experiences for the web.

Adding 8th Wall Web to A-Frame

Cloud Editor

  1. Simply add a "meta" tag in head.html to include the 8-Frame library in your project. If you are cloning from any of 8th Wall's A-Frame based templates or self-hosted projects, it will already be there. There is also no need to manually add your AppKey.

<meta name="8thwall:renderer" content="aframe">

Self Hosted

8th Wall Web can be added to your A-Frame project in a few easy steps:

  1. Include a slightly modified version of A-Frame (referred to as "8-Frame") which fixes some polish concerns:

<script src="//cdn.8thwall.com/web/aframe/8frame-0.9.2.min.js"></script>

  2. Add the following script tag to the HEAD of your page. Replace the X's with your app key:

<script src="//apps.8thwall.com/xrweb?appKey=XXXXX"></script>

World Tracking and/or Image Targets

  1. If you want World Tracking or Image Target tracking, add an xrweb component to your a-scene tag:

<a-scene xrweb>

xrweb Attributes

Component Type Default Description
disableWorldTracking bool false If true, turn off SLAM tracking for efficiency.
cameraDirection string back Desired camera to use. Choose from: back or front. Use cameraDirection: front; with mirroredDisplay: true; for selfie mode. Note that world tracking is only supported with cameraDirection: back;.
allowedDevices string "mobile" Supported device classes. Choose from: 'mobile' or 'any'. Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams. Note that world tracking is only supported on mobile.
mirroredDisplay bool false If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode. Should not be enabled if World Tracking (SLAM) is enabled.

Notes:

  • cameraDirection: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must disable world tracking by setting disableWorldTracking: true.
  • World tracking (SLAM) is only supported on mobile devices.
  • xrweb and xrface cannot be used at the same time.

Face Effects

  1. If you want Face Effects tracking, add an xrface component to your a-scene tag:

<a-scene xrface>

xrface Attributes

Component Type Default Description
cameraDirection string back Desired camera to use. Choose from: back or front. Use cameraDirection: front; with mirroredDisplay: true; for selfie mode.
allowedDevices string "mobile" Supported device classes. Choose from: 'mobile' or 'any'. Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams.
mirroredDisplay bool false If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode.
meshGeometry array ['face'] Configure which portions of the face mesh will have returned triangle indices. Can be any combination of 'face', 'eyes' and/or 'mouth'.

Notes:

  • xrweb and xrface cannot be used at the same time.

Functions

Function Description
xrwebComponent Creates an A-Frame component for World Tracking and/or Image Target tracking which can be registered with AFRAME.registerComponent(). Generally won't need to be called directly.
xrfaceComponent Creates an A-Frame component for Face Effects tracking which can be registered with AFRAME.registerComponent(). Generally won't need to be called directly.

Example - SLAM enabled (default)

  <a-scene xrweb>

Example - SLAM disabled (image tracking only)

  <a-scene xrweb="disableWorldTracking: true">

Example - Front camera (image tracking only)

  <a-scene xrweb="disableWorldTracking: true; cameraDirection: front">

XR8.AFrame.xrwebComponent()

XR8.AFrame.xrwebComponent()

Parameters

None

Description

Creates an A-Frame component which can be registered with AFRAME.registerComponent(). This generally won't need to be called directly: on 8th Wall Web script load, the component is registered automatically if A-Frame is detected (i.e., if window.AFRAME exists).

Example

window.AFRAME.registerComponent('xrweb', XR8.AFrame.xrwebComponent())

AFrame Events

This section describes the events emitted by the "xrweb" or "xrface" A-Frame component.

You can listen for these events in your web application to call a function that handles the event.

Events Emitted

The following events are emitted by both "xrweb" and "xrface":

Event Emitted Description
camerastatuschange This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
realityerror This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
realityready This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.
screenshoterror This event is emitted in response to a screenshotrequest event resulting in an error.
screenshotready This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided.

Events Emitted by xrweb

Event Emitted Description
xrimageloading This event is emitted when detection image loading begins.
xrimagescanning This event is emitted when all detection images have been loaded and scanning has begun.
xrimagefound This event is emitted when an image target is first found.
xrimageupdated This event is emitted when an image target changes position, rotation or scale.
xrimagelost This event is emitted when an image target is no longer being tracked.

Events Emitted by xrface

Event Emitted Description
xrfaceloading This event is emitted when loading begins for additional face AR resources.
xrfacescanning This event is emitted when AR resources have been loaded and scanning has begun.
xrfacefound This event is emitted when a face is first found.
xrfaceupdated This event is emitted when a face is subsequently found or updated.
xrfacelost This event is emitted when a face is no longer being tracked.

camerastatuschange

Description

This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.

Example:

var handleCameraStatusChange = function handleCameraStatusChange(event) {
  console.log('status change', event.detail.status);

  switch (event.detail.status) {
    case 'requesting':
      // Do something
      break;

    case 'hasStream':
      // Do something
      break;

    case 'failed':
      event.target.emit('realityerror');
      break;
  }
};
let scene = this.el.sceneEl
scene.addEventListener('camerastatuschange', handleCameraStatusChange)

realityerror

Description

This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.

Example:

let scene = this.el.sceneEl
  scene.addEventListener('realityerror', (event) => {
    if (XR8.XrDevice.isDeviceBrowserCompatible()) {
      // Browser is compatible. Print the exception for more information.
      console.log(event.detail.error)
      return
    }

    // Browser is not compatible. Check the reasons why it may not be.
    for (let reason of XR8.XrDevice.incompatibleReasons()) {
      // Handle each XR8.XrDevice.IncompatibilityReasons
    }
  })

realityready

Description

This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.

Example:

let scene = this.el.sceneEl
scene.addEventListener('realityready', () => {
  // Hide loading UI
})

screenshoterror

Description

This event is emitted in response to a screenshotrequest event resulting in an error.

Example:

let scene = this.el.sceneEl
scene.addEventListener('screenshoterror', (event) => {
  console.log(event.detail)
  // Handle screenshot error.
})

screenshotready

Description

This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided.

Example:

let scene = this.el.sceneEl
scene.addEventListener('screenshotready', (event) => {
  // screenshotPreview is an <img> HTML element
  const image = document.getElementById('screenshotPreview')
  image.src = 'data:image/jpeg;base64,' + event.detail
})
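The event.detail payload is a base64-encoded JPEG string. As a side note, a small helper (ours, not part of the 8th Wall API) can estimate the decoded image size before storing or uploading it:

```javascript
// Estimate the decoded byte size of a base64 string such as the
// screenshotready event.detail payload. (This helper is illustrative,
// not part of the 8th Wall API.)
const base64ByteLength = (b64) => {
  const padding = (b64.match(/=+$/) || [''])[0].length
  return (b64.length * 3) / 4 - padding
}

// 'aGVsbG8=' decodes to the 5 bytes of 'hello'.
const size = base64ByteLength('aGVsbG8=')
```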

xrimageloading

Description

This event is emitted by xrweb when detection image loading begins.

xrimageloading.detail : { imageTargets: {name, type, metadata} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.

Example:

const componentMap = {}

const addComponents = ({detail}) => {
  detail.imageTargets.forEach(({name, type, metadata}) => {
    // ...
  })
}

this.el.sceneEl.addEventListener('xrimageloading', addComponents)

xrimagescanning

Description

This event is emitted by xrweb when all detection images have been loaded and scanning has begun.

xrimagescanning.detail : { imageTargets: {name, type, metadata, geometry} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.
geometry Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight}. If type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians}.

If type = FLAT, geometry:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type= CYLINDRICAL or CONICAL, geometry:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.
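As an illustration of how the curved-target geometry fields above relate to physical dimensions, the following helper (ours, not part of the 8th Wall API) derives an approximate surface size for a cylindrical target; the field names come from the table:

```javascript
// Illustrative helper (not part of the 8th Wall API): derive approximate
// display dimensions from the curved-target geometry fields above.
const curvedTargetSize = ({height, radiusTop, radiusBottom, arcLengthRadians}) => {
  const meanRadius = (radiusTop + radiusBottom) / 2
  return {
    height,                                   // vertical extent of the target
    arcWidth: meanRadius * arcLengthRadians,  // width along the curved surface
  }
}

// A label wrapped halfway around a 10cm-diameter bottle, 20cm tall:
const size = curvedTargetSize({
  height: 0.2, radiusTop: 0.05, radiusBottom: 0.05,
  arcStartRadians: 0, arcLengthRadians: Math.PI,
})
```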

xrimagefound

Description

This event is emitted by xrweb when an image target is first found.

xrimagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type= CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

Example:

AFRAME.registerComponent('my-named-image-target', {
  schema: {
    name: { type: 'string' }
  },
  init: function () {
    const object3D = this.el.object3D
    const name = this.data.name
    object3D.visible = false

    const showImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.position.copy(detail.position)
      object3D.quaternion.copy(detail.rotation)
      object3D.scale.set(detail.scale, detail.scale, detail.scale)
      object3D.visible = true
    }

    const hideImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.visible = false
    }

    this.el.sceneEl.addEventListener('xrimagefound', showImage)
    this.el.sceneEl.addEventListener('xrimageupdated', showImage)
    this.el.sceneEl.addEventListener('xrimagelost', hideImage)
  }
})

xrimageupdated

Description

This event is emitted by xrweb when an image target changes position, rotation or scale.

xrimageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type= CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

Example:

AFRAME.registerComponent('my-named-image-target', {
  schema: {
    name: { type: 'string' }
  },
  init: function () {
    const object3D = this.el.object3D
    const name = this.data.name
    object3D.visible = false

    const showImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.position.copy(detail.position)
      object3D.quaternion.copy(detail.rotation)
      object3D.scale.set(detail.scale, detail.scale, detail.scale)
      object3D.visible = true
    }

    const hideImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.visible = false
    }

    this.el.sceneEl.addEventListener('xrimagefound', showImage)
    this.el.sceneEl.addEventListener('xrimageupdated', showImage)
    this.el.sceneEl.addEventListener('xrimagelost', hideImage)
  }
})

xrimagelost

Description

This event is emitted by xrweb when an image target is no longer being tracked.

xrimagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type= CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

Example:

AFRAME.registerComponent('my-named-image-target', {
  schema: {
    name: { type: 'string' }
  },
  init: function () {
    const object3D = this.el.object3D
    const name = this.data.name
    object3D.visible = false

    const showImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.position.copy(detail.position)
      object3D.quaternion.copy(detail.rotation)
      object3D.scale.set(detail.scale, detail.scale, detail.scale)
      object3D.visible = true
    }

    const hideImage = ({detail}) => {
      if (name != detail.name) {
        return
      }
      object3D.visible = false
    }

    this.el.sceneEl.addEventListener('xrimagefound', showImage)
    this.el.sceneEl.addEventListener('xrimageupdated', showImage)
    this.el.sceneEl.addEventListener('xrimagelost', hideImage)
  }
})

xrfaceloading

Description

This event is emitted by xrface when loading begins for additional face AR resources.

xrfaceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}

Property Description
maxDetections The maximum number of faces that can be simultaneously processed.
pointsPerDetection Number of vertices that will be extracted per face.
indices: [{a, b, c}] Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure.
uvs: [{u, v}] uv positions into a texture map corresponding to the returned vertex points.

Example:

const initMesh = ({detail}) => {
  const {pointsPerDetection, uvs, indices} = detail
  this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfaceloading', initMesh)
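The generateMeshGeometry helper referenced in the example above is not defined by the API docs. A sketch of the kind of work such a helper might do is shown below: flattening the indices [{a, b, c}] and uvs [{u, v}] arrays from the event detail into typed arrays suitable for a render engine's buffer geometry (the function name and output shape are our own):

```javascript
// Illustrative sketch (not part of the 8th Wall API): flatten the face mesh
// topology from an xrfaceloading/xrfacescanning detail into typed arrays.
const flattenFaceMeshData = ({pointsPerDetection, indices, uvs}) => ({
  positions: new Float32Array(pointsPerDetection * 3),  // filled per-frame on xrfaceupdated
  index: new Uint16Array(indices.flatMap(({a, b, c}) => [a, b, c])),
  uv: new Float32Array(uvs.flatMap(({u, v}) => [u, v])),
})

// Minimal stand-in for an event detail with a single triangle:
const meshData = flattenFaceMeshData({
  pointsPerDetection: 3,
  indices: [{a: 0, b: 1, c: 2}],
  uvs: [{u: 0, v: 0}, {u: 1, v: 0}, {u: 0, v: 1}],
})
```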

xrfacescanning

Description

This event is emitted by xrface when all face AR resources have been loaded and scanning has begun.

xrfacescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}

Property Description
maxDetections The maximum number of faces that can be simultaneously processed.
pointsPerDetection Number of vertices that will be extracted per face.
indices: [{a, b, c}] Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure.
uvs: [{u, v}] uv positions into a texture map corresponding to the returned vertex points.

Example:

const initMesh = ({detail}) => {
  const {pointsPerDetection, uvs, indices} = detail
  this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfacescanning', initMesh)

xrfacefound

Description

This event is emitted by xrface when a face is first found.

xrfacefound.detail : {id, transform, vertices, normals, attachmentPoints}

Property Description
id A numerical id of the located face.
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} Transform information of the located face.
vertices: [{x, y, z}] Position of face points, relative to transform.
normals: [{x, y, z}] Normal direction of vertices, relative to transform.
attachmentPoints: { name, position: {x,y,z} } See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform.

transform is an object with the following properties:

Property Description
position {x, y, z} The 3d position of the located face.
rotation {w, x, y, z} The 3d local orientation of the located face.
scale A scale factor that should be applied to objects attached to this face.
scaledWidth Approximate width of the head in the scene when multiplied by scale.
scaledHeight Approximate height of the head in the scene when multiplied by scale.
scaledDepth Approximate depth of the head in the scene when multiplied by scale.

Example:

const faceRigidComponent = {
  init: function () {
    const object3D = this.el.object3D
    object3D.visible = false
    const show = ({detail}) => {
      const {position, rotation, scale} = detail.transform
      object3D.position.copy(position)
      object3D.quaternion.copy(rotation)
      object3D.scale.set(scale, scale, scale)
      object3D.visible = true
    }
    const hide = ({detail}) => { object3D.visible = false }
    this.el.sceneEl.addEventListener('xrfacefound', show)
    this.el.sceneEl.addEventListener('xrfaceupdated', show)
    this.el.sceneEl.addEventListener('xrfacelost', hide)
  }
}
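Attachment points can be used to pin content to facial features. The sketch below reads a named attachment point's position from an event detail, assuming attachmentPoints is keyed by name as the detail table suggests; the 'noseTip' key is illustrative, and the available names are listed under XR8.FaceController.AttachmentPoints. With a real three.js object you would call object3D.position.copy(point.position):

```javascript
// Illustrative helper (not part of the 8th Wall API): look up a named
// attachment point in an xrfacefound/xrfaceupdated event detail. Positions
// are relative to the face transform.
const attachmentPointPosition = (detail, name) => {
  const point = detail.attachmentPoints[name]
  return point ? point.position : null
}

// Minimal stand-in for an event detail (the 'noseTip' name is hypothetical):
const detail = {attachmentPoints: {noseTip: {position: {x: 0, y: 0.1, z: 0.05}}}}
const nose = attachmentPointPosition(detail, 'noseTip')
```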

xrfaceupdated

Description

This event is emitted by xrface when a face is subsequently found or updated.

xrfaceupdated.detail : {id, transform, vertices, normals, attachmentPoints}

Property Description
id A numerical id of the located face.
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} Transform information of the located face.
vertices: [{x, y, z}] Position of face points, relative to transform.
normals: [{x, y, z}] Normal direction of vertices, relative to transform.
attachmentPoints: { name, position: {x,y,z} } See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform.

transform is an object with the following properties:

Property Description
position {x, y, z} The 3d position of the located face.
rotation {w, x, y, z} The 3d local orientation of the located face.
scale A scale factor that should be applied to objects attached to this face.
scaledWidth Approximate width of the head in the scene when multiplied by scale.
scaledHeight Approximate height of the head in the scene when multiplied by scale.
scaledDepth Approximate depth of the head in the scene when multiplied by scale.

Example:

const faceRigidComponent = {
  init: function () {
    const object3D = this.el.object3D
    object3D.visible = false
    const show = ({detail}) => {
      const {position, rotation, scale} = detail.transform
      object3D.position.copy(position)
      object3D.quaternion.copy(rotation)
      object3D.scale.set(scale, scale, scale)
      object3D.visible = true
    }
    const hide = ({detail}) => { object3D.visible = false }
    this.el.sceneEl.addEventListener('xrfacefound', show)
    this.el.sceneEl.addEventListener('xrfaceupdated', show)
    this.el.sceneEl.addEventListener('xrfacelost', hide)
  }
}

xrfacelost

Description

This event is emitted by xrface when a face is no longer being tracked.

xrfacelost.detail : {id}

Property Description
id A numerical id of the face that was lost.

Example:

const faceRigidComponent = {
  init: function () {
    const object3D = this.el.object3D
    object3D.visible = false
    const show = ({detail}) => {
      const {position, rotation, scale} = detail.transform
      object3D.position.copy(position)
      object3D.quaternion.copy(rotation)
      object3D.scale.set(scale, scale, scale)
      object3D.visible = true
    }
    const hide = ({detail}) => { object3D.visible = false }
    this.el.sceneEl.addEventListener('xrfacefound', show)
    this.el.sceneEl.addEventListener('xrfaceupdated', show)
    this.el.sceneEl.addEventListener('xrfacelost', hide)
  }
}

AFrame Event Listeners

This section describes the events that are listened for by the "xrweb" A-Frame component.

You can emit these events in your web application to perform various actions:

Event Listener Description
hidecamerafeed Hides the camera feed. Tracking does not stop.
recenter Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
screenshotrequest Emits a request to the engine to capture a screenshot of the AFrame canvas. The engine will emit a screenshotready event with the JPEG compressed image, or screenshoterror if an error has occurred.
showcamerafeed Shows the camera feed.
stopxr Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

hidecamerafeed

scene.emit('hidecamerafeed')

Parameters

None

Description

Hides the camera feed. Tracking does not stop.

Example

let scene = this.el.sceneEl
scene.emit('hidecamerafeed')

recenter

scene.emit('recenter', {origin, facing})

Parameters

Parameter Description
origin: {x, y, z} [Optional] The location of the new origin.
facing: {w, x, y, z} [Optional] A quaternion representing direction the camera should face at the origin.

Description

Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.

If origin and facing are not provided, camera is reset to origin previously specified by a call to recenter or the last call to updateCameraProjectionMatrix(). Note: with A-Frame, updateCameraProjectionMatrix() initially gets called based on initial camera position in the scene.

Example

let scene = this.el.sceneEl
scene.emit('recenter')

// OR

let scene = this.el.sceneEl
scene.emit('recenter', {
  origin: {x: 1, y: 4, z: 0},
  facing: {w: 0.9856, x:0, y:0.169, z:0}
})
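The facing quaternion in the example above corresponds to a rotation about the +Y (up) axis. A small helper (ours, not part of the 8th Wall API) builds such a quaternion from a yaw angle in radians:

```javascript
// Build a {w, x, y, z} quaternion for a rotation of `yaw` radians about the
// +Y (up) axis, in the form expected by the recenter event's facing argument.
// (This helper is illustrative, not part of the 8th Wall API.)
const yawToFacing = (yaw) => ({
  w: Math.cos(yaw / 2),
  x: 0,
  y: Math.sin(yaw / 2),
  z: 0,
})

const facing = yawToFacing(Math.PI / 2)  // quarter turn about +Y
// In a scene: this.el.sceneEl.emit('recenter', {origin: {x: 0, y: 0, z: 0}, facing})
```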

screenshotrequest

scene.emit('screenshotrequest')

Parameters

None

Description

Emits a request to the engine to capture a screenshot of the AFrame canvas. The engine will emit a screenshotready event with the JPEG compressed image, or screenshoterror if an error has occurred.

Example

const scene = this.el.sceneEl
const photoButton = document.getElementById('photoButton')
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')

// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  scene.emit('screenshotrequest')
})

scene.addEventListener('screenshotready', event => {
  image.src = 'data:image/jpeg;base64,' + event.detail
})

scene.addEventListener('screenshoterror', event => {
  console.log("error")
})

showcamerafeed

scene.emit('showcamerafeed')

Parameters

None

Description

Shows the camera feed.

Example

let scene = this.el.sceneEl
scene.emit('showcamerafeed')

stopxr

scene.emit('stopxr')

Parameters

None

Description

Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

Example

let scene = this.el.sceneEl
scene.emit('stopxr')

XR8.Babylonjs

Babylon.js (https://www.babylonjs.com/) is a complete JavaScript framework for building 3D games and experiences with HTML5 and WebGL. Combined with 8th Wall Web, you can create powerful Web AR experiences.

Tutorial Video:

Description

Provides an integration that interfaces with the Babylon.js environment and lifecycle to drive the Babylon.js camera and render virtual overlays.

Functions

Function Description
xrCameraBehavior Get a behavior that can be attached to a Babylon camera to run World Tracking and/or Image Targets.
faceCameraBehavior Get a behavior that can be attached to a Babylon camera to run Face Effects.

XR8.Babylonjs.faceCameraBehavior()

XR8.Babylonjs.faceCameraBehavior(config, faceConfig)

Description

Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.faceCameraBehavior())

Parameters

Parameter Description
config [Optional] Configuration parameters to pass to XR8.run()
faceConfig [Optional] Face configuration parameters to pass to XR8.FaceController

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool true If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

faceConfig [Optional] is an object with the following properties:

Parameter Description
nearClip [Optional] The distance from the camera of the near clip plane. By default it will use the Babylon camera.minZ
farClip [Optional] The distance from the camera of the far clip plane. By default it will use the Babylon camera.maxZ
meshGeometry [Optional] List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.NOSE,]. The default is [XR8.FaceController.MeshGeometry.FACE]
imageTargets [Optional] List of names of the image targets to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list.
leftHandedAxes [Optional] If true, use left-handed coordinates.
mirroredDisplay [Optional] If true, flip left and right in the output.

Returns

A Babylon JS behavior that connects the Face Effects engine to the Babylon camera and starts the camera feed and tracking.

Example

const startScene = (canvas) => {
  const engine = new BABYLON.Engine(canvas, true /* antialias */)
  const scene = new BABYLON.Scene(engine)
  scene.useRightHandedSystem = false
  
  const camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 0, 0), scene)
  camera.rotation = new BABYLON.Vector3(0, scene.useRightHandedSystem ? Math.PI : 0, 0)
  camera.minZ = 0.0001
  camera.maxZ = 10000

  // Add a light to the scene  
  const directionalLight =
    new BABYLON.DirectionalLight("DirectionalLight", new BABYLON.Vector3(-5, -10, 7), scene)
  directionalLight.intensity = 0.5
  
  // Mesh logic
  const faceMesh = new BABYLON.Mesh("face", scene);
  const material = new BABYLON.StandardMaterial("boxMaterial", scene)
  material.diffuseColor = new BABYLON.Color3(173 / 255.0, 80 / 255.0, 255 / 255.0)
  faceMesh.material = material
  
  let facePoints = []

  const runConfig = {
    cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
    allowedDevices: XR8.XrConfig.device().ANY,
    verbose: true,
  }

  camera.addBehavior(XR8.Babylonjs.faceCameraBehavior(runConfig)) // Connect camera to XR and show camera feed.

  engine.runRenderLoop(() => {
    scene.render()
  })
}

XR8.Babylonjs.xrCameraBehavior()

XR8.Babylonjs.xrCameraBehavior(config, xrConfig)

Description

Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())

Parameters

Parameter Description
config [Optional] Configuration parameters to pass to XR8.run()
xrConfig [Optional] Configuration parameters to pass to XR8.XrController

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool false If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

xrConfig [Optional] is an object with the following properties:

Parameter Description
enableLighting [Optional] If true, return an estimate of lighting information.
enableWorldPoints [Optional] If true, return the map points used for tracking.
disableWorldTracking [Optional] If true, turn off SLAM tracking for efficiency.
imageTargets [Optional] List of names of the image targets to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list.
leftHandedAxes [Optional] If true, use left-handed coordinates.
mirroredDisplay [Optional] If true, flip left and right in the output.

Returns

A Babylon JS behavior that connects the XR engine to the Babylon camera and starts the camera feed and tracking.

Example

let surface, engine, scene, camera

const startScene = () => {
  const canvas = document.getElementById('renderCanvas')

  engine = new BABYLON.Engine(canvas, true, { stencil: true, preserveDrawingBuffer: true })
  engine.enableOfflineSupport = false

  scene = new BABYLON.Scene(engine)
  camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 3, 0), scene)

  initXrScene({ scene, camera }) // Add objects to the scene and set starting camera position.

  // Connect the camera to the XR engine and show camera feed
  camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())

  engine.runRenderLoop(() => {
    scene.render()
  })

  window.addEventListener('resize', () => {
    engine.resize()
  })
}

BabylonJS Observables

Image Target Observables

onXrImageLoadingObservable: Fires when detection image loading begins.

onXrImageLoadingObservable : { imageTargets: {name, type, metadata} }

onXrImageScanningObservable: Fires when all detection images have been loaded and scanning has begun.

onXrImageScanningObservable : { imageTargets: {name, type, metadata, geometry} }

onXrImageFoundObservable: Fires when an image target is first found.

onXrImageFoundObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

onXrImageUpdatedObservable: Fires when an image target changes position, rotation or scale.

onXrImageUpdatedObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

onXrImageLostObservable: Fires when an image target is no longer being tracked.

onXrImageLostObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Face Effects Observables

onFaceLoadingObservable: Fires when loading begins for additional face AR resources.

onFaceLoadingObservable : {maxDetections, pointsPerDetection, indices, uvs}

onFaceScanningObservable: Fires when all face AR resources have been loaded and scanning has begun.

onFaceScanningObservable: {maxDetections, pointsPerDetection, indices, uvs}

onFaceFoundObservable: Fires when a face is first found.

onFaceFoundObservable : {id, transform, attachmentPoints, vertices, normals}

onFaceUpdatedObservable: Fires when a face is subsequently found.

onFaceUpdatedObservable : {id, transform, attachmentPoints, vertices, normals}

onFaceLostObservable: Fires when a face is no longer being tracked.

onFaceLostObservable : {id}

Image Target Example

scene.onXrImageUpdatedObservable.add(e => {
  target.position.copyFrom(e.position)
  target.rotationQuaternion.copyFrom(e.rotation)
  target.scaling.set(e.scale, e.scale, e.scale)
})

Face Effects Example

// This is called when face AR resources begin loading. It provides the static information
// about the face, such as the UVs and indices.
scene.onFaceLoadingObservable.add((event) => {
  const {indices, maxDetections, pointsPerDetection, uvs} = event

  // Babylon expects all vertex data to be a flat list of numbers
  facePoints = Array(pointsPerDetection)
  for (let i = 0; i < pointsPerDetection; i++) {
    const facePoint = BABYLON.MeshBuilder.CreateBox("box", {size: 0.02}, scene)
    facePoint.material = material
    facePoint.parent = faceMesh
    facePoints[i] = facePoint
  }
})

// This is called each time the face is updated, which happens on a per-frame basis.
scene.onFaceUpdatedObservable.add((event) => {
  const {vertices, normals, transform} = event;
  const {scale, position, rotation} = transform
  
  vertices.forEach((v, i) => {
    facePoints[i].position.x = v.x
    facePoints[i].position.y = v.y
    facePoints[i].position.z = v.z
  })

  faceMesh.scalingDeterminant = scale
  faceMesh.position = position
  faceMesh.rotationQuaternion = rotation
})

CameraPipelineModule

8th Wall camera applications are built using a camera pipeline module framework. Applications install modules which then control the behavior of the application at runtime.

Refer to XR8.addCameraPipelineModule() for details on adding camera pipeline modules to your application.

A camera pipeline module object must have a .name string which is unique within the application. It should implement one or more of the following camera lifecycle methods. These methods will be executed at the appropriate point in the run loop.

During the main runtime of an application, each camera frame goes through the following cycle:

onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender

Camera modules should implement one or more of the following camera lifecycle methods:

Function Description
onAppResourcesLoaded Called when we have received the resources attached to an app from the server.
onAttach Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running.
onBeforeRun Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing.
onCameraStatusChange Called when a change occurs during the camera permissions request.
onCanvasSizeChange Called when the canvas changes size.
onDetach Called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline.
onDeviceOrientationChange Called when the device changes landscape/portrait orientation.
onException Called when an error occurs in XR. Called with the error object.
onPaused Called when XR8.pause() is called.
onProcessCpu Called to read results of GPU processing and return usable data.
onProcessGpu Called to start GPU processing.
onRender Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
onResume Called when XR8.resume() is called.
onStart Called when XR starts. First callback after XR8.run() is called.
onUpdate Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename where the name is given by module.name = "modulename".
onVideoSizeChange Called when the video feed changes size.
requiredPermissions Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR.

Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline by returning an object; the returned data is keyed by the module's name.
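
This keying convention can be sketched with plain objects. The makeStageResult helper and the module name 'mymodule' below are hypothetical, written only to illustrate how each module's return value is collected under its .name for later stages (the real collection is done internally by the engine):

```javascript
// Hypothetical sketch of how pipeline stages pass data, keyed by module name.
const makeStageResult = (modules, stage, input) => {
  const result = {}
  for (const m of modules) {
    // Each module's return value is stored under its .name
    if (m[stage]) result[m.name] = m[stage](input)
  }
  return result
}

const myModule = {
  name: 'mymodule',
  onProcessGpu: () => ({gpuDataA: 1}),
  // Later stages read earlier results under this module's name:
  onProcessCpu: ({processGpuResult}) => ({cpuDataA: processGpuResult.mymodule.gpuDataA + 1}),
}

const processGpuResult = makeStageResult([myModule], 'onProcessGpu', {})
const processCpuResult = makeStageResult([myModule], 'onProcessCpu', {processGpuResult})
// processCpuResult.mymodule.cpuDataA is now available to onUpdate
```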

onAppResourcesLoaded()

onAppResourcesLoaded: ({ framework, imageTargets, version })

Description

Called when we have received the resources attached to an app from the server.

Parameters

Parameter Description
framework The framework bindings for this module for dispatching events.
imageTargets [Optional] An array of image targets with the fields {imagePath, metadata, name}
version The engine version, e.g. 14.0.8.949

Example

XR8.addCameraPipelineModule({
  name: 'myPipelineModule',
  onAppResourcesLoaded: ({ framework, version, imageTargets }) => {
    //...
  },
})

onAttach()

onAttach: ({framework, canvas, GLctx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, status, stream, video, version, imageTargets, config})

Description

onAttach() is called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running. It includes all the most recent data available from:

  • onStart
  • onDeviceOrientationChange
  • onCanvasSizeChange
  • onVideoSizeChange
  • onCameraStatusChange
  • onAppResourcesLoaded

Parameters

Parameter Description
framework The framework bindings for this module for dispatching events.
canvas The canvas that backs GPU processing and user display.
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
isWebgl2 True if GLctx is a WebGL2RenderingContext.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.
status One of [ 'requesting', 'hasStream', 'hasVideo', 'failed' ]
stream The MediaStream associated with the camera feed.
video The video dom element displaying the stream.
version [Optional] The engine version, e.g. 14.0.8.949, if app resources are loaded.
imageTargets [Optional] An array of image targets with the fields {imagePath, metadata, name}
config The configuration parameters that were passed to XR8.run().
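
Example

A minimal sketch of an onAttach handler. The aspectRatio helper and the module name are hypothetical; onAttach is a good place for work that needs the canvas size or GL context, since it runs with the most recent values from all setup callbacks:

```javascript
// Hypothetical helper used below to size content from the canvas dimensions.
const aspectRatio = (width, height) => width / height

const attachModule = {
  name: 'attachmodule',
  onAttach: ({canvasWidth, canvasHeight, orientation}) => {
    const aspect = aspectRatio(canvasWidth, canvasHeight)
    console.log(`attached: aspect=${aspect}, orientation=${orientation}`)
  },
}

// Wiring (requires the 8th Wall library to be loaded on the page):
// XR8.addCameraPipelineModule(attachModule)
```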

onBeforeRun()

onBeforeRun: ({ config })

Description

onBeforeRun is called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing.

Parameters

Parameter Description
config The configuration parameters that were passed to XR8.run().
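
Example

A sketch of delaying startup until assets are ready. The preloadAssets function is a hypothetical placeholder for your own loading logic; the engine waits on any promise returned from onBeforeRun:

```javascript
// Hypothetical preloader; substitute your own asset-loading promise.
const preloadAssets = () =>
  new Promise(resolve => setTimeout(() => resolve('loaded'), 10))

const preloadModule = {
  name: 'preload',
  // XR waits on all promises returned from onBeforeRun before continuing.
  onBeforeRun: () => preloadAssets(),
}

// Wiring (requires the 8th Wall library to be loaded on the page):
// XR8.addCameraPipelineModule(preloadModule)
```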

onCameraStatusChange()

onCameraStatusChange: ({ status, stream, video, config })

Description

Called when a change occurs during the camera permissions request.

Called with the status, and, if applicable, a reference to the newly available data. The typical status flow will be:

requesting -> hasStream -> hasVideo.

Parameters

Parameter Description
status One of [ 'requesting', 'hasStream', 'hasVideo', 'failed' ]
stream [Optional] The MediaStream associated with the camera feed, if status is hasStream.
video [Optional] The video DOM element displaying the stream, if status is hasVideo.
config The configuration parameters that were passed to XR8.run(), if status is "requesting".

The status parameter has the following states:

State Description
requesting In 'requesting', the browser is opening the camera and, if applicable, checking the user permissions. In this state, it is appropriate to display a prompt to the user to accept camera permissions.
hasStream Once the user permissions are granted and the camera is successfully opened, the status switches to 'hasStream' and any user prompts regarding permissions can be dismissed.
hasVideo Once camera frame data starts to be available for processing, the status switches to 'hasVideo', and the camera feed can begin displaying.
failed If the camera feed fails to open, the status is 'failed'. In this case it's possible that the user has denied permissions, and so helping them to re-enable permissions is advisable.

Example

XR8.addCameraPipelineModule({
  name: 'camerastartupmodule',
  onCameraStatusChange: ({status}) => {
    if (status === 'requesting') {
      myApplication.showCameraPermissionsPrompt()
    } else if (status === 'hasStream') {
      myApplication.dismissCameraPermissionsPrompt()
    } else if (status === 'hasVideo') {
      myApplication.startMainApplication()
    } else if (status === 'failed') {
      myApplication.promptUserToChangeBrowserSettings()
    }
  },
})

onCanvasSizeChange()

onCanvasSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })

Description

Called when the canvas changes size. Called with dimensions of video and canvas.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onCanvasSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
    myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
  },
})

onDetach()

onDetach: ({framework})

Description

onDetach is called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline.

Parameters

Parameter Description
framework The framework bindings for this module for dispatching events.
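
Example

onAttach and onDetach typically pair resource setup with teardown. A minimal sketch; the resources array below is a hypothetical stand-in for real allocations such as textures or event listeners:

```javascript
// Sketch of pairing setup in onAttach with teardown in onDetach.
const resources = []

const resourceModule = {
  name: 'resourcemodule',
  onAttach: () => {
    resources.push('texture')  // allocate when frame updates begin
  },
  onDetach: () => {
    resources.length = 0       // release when frame updates end
  },
}

// Wiring (requires the 8th Wall library to be loaded on the page):
// XR8.addCameraPipelineModule(resourceModule)
```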

onDeviceOrientationChange()

onDeviceOrientationChange: ({ GLctx, videoWidth, videoHeight, orientation })

Description

Called when the device changes landscape/portrait orientation.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onDeviceOrientationChange: ({ GLctx, videoWidth, videoHeight, orientation }) => {
    // handleResize({ GLctx, videoWidth, videoHeight, orientation })
  },
})

onException()

onException: (error)

Description

Called when an error occurs in XR. Called with the error object.

Parameters

Parameter Description
error The error object that was thrown

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onException: (error) => {
    console.error('XR threw an exception', error)
  },
})

onPaused()

onPaused: ()

Description

Called when XR8.pause() is called.

Parameters

None

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onPaused: () => {
    console.log('pausing application')
  },
})

onProcessGpu()

onProcessGpu: ({ framework, frameStartResult })

Description

Called to start GPU processing.

Parameters

Parameter Description
framework { dispatchEvent(eventName, detail) } : Emits a named event with the supplied detail.
frameStartResult { cameraTexture, GLctx, textureWidth, textureHeight, orientation, videoTime, repeatFrame }

The frameStartResult parameter has the following properties:

Property Description
cameraTexture The WebGLTexture containing camera feed data.
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
textureWidth The width (in pixels) of the camera feed texture.
textureHeight The height (in pixels) of the camera feed texture.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).
videoTime The timestamp of this video frame.
repeatFrame True if the camera feed has not updated since the last call.

Returns

Any data that you wish to provide to onProcessCpu and onUpdate should be returned. It will be provided to those methods as processGpuResult.modulename

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessGpu: ({frameStartResult}) => {
    const {cameraTexture, GLctx, textureWidth, textureHeight} = frameStartResult

    if(!cameraTexture.name){
      console.error("[index] Camera texture does not have a name")
    }

    const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
    // Do relevant GPU processing here
    ...
    XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)

    // These fields will be provided to onProcessCpu and onUpdate
    return {gpuDataA, gpuDataB}
  },
})

onProcessCpu()

onProcessCpu: ({ framework, frameStartResult, processGpuResult })

Description

Called to read results of GPU processing and return usable data. Called with { frameStartResult, processGpuResult }. Data returned by modules in onProcessGpu will be present as processGpuResult.modulename where the name is given by module.name = "modulename".

Parameters

Parameter Description
framework The framework bindings for this module for dispatching events.
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.

Returns

Any data that you wish to provide to onUpdate should be returned. It will be provided to that method as processCpuResult.modulename

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessCpu: ({ frameStartResult, processGpuResult }) => {
    const GLctx = frameStartResult.GLctx
    const { cameraTexture } = frameStartResult
    const { camerapixelarray, mycamerapipelinemodule } = processGpuResult

    // Do something interesting with mycamerapipelinemodule.gpuDataA and mycamerapipelinemodule.gpuDataB
    ...
    
    // These fields will be provided to onUpdate
    return {cpuDataA, cpuDataB}
  },
})

onRender()

onRender: ()

Description

Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.

Parameters

None

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onRender: () => {
    // This is already done by XR8.Threejs.pipelineModule() but is provided here as an illustration.
    XR8.Threejs.xrScene().renderer.render()
  },
})

onResume()

onResume: ()

Description

Called when XR8.resume() is called.

Parameters

None

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onResume: () => {
    console.log('resuming application')
  },
})

onStart()

onStart: ({ canvas, GLctx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, config })

Description

Called when XR starts.

Parameters

Parameter Description
canvas The canvas that backs GPU processing and user display.
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
isWebgl2 True if GLctx is a WebGL2RenderingContext.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.
config The configuration parameters that were passed to XR8.run().

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onStart: ({canvasWidth, canvasHeight}) => {
    // Get the 3js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
    // reason we can access it here now is because 'mycamerapipelinemodule' was installed after
    // XR8.Threejs.pipelineModule().
    const {scene, camera} = XR8.Threejs.xrScene()

    // Add some objects to the scene and set the starting camera position.
    myInitXrScene({scene, camera})

    // Sync the xr controller's 6DoF position and camera parameters with our scene.
    XR8.XrController.updateCameraProjectionMatrix({
      origin: camera.position,
      facing: camera.quaternion,
    })
  },
})

onUpdate()

onUpdate: ({ framework, frameStartResult, processGpuResult, processCpuResult })

Description

Called to update the scene before render. Called with { framework, frameStartResult, processGpuResult, processCpuResult }. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename where the name is given by module.name = "modulename".

Parameters

Parameter Description
framework The framework bindings for this module for dispatching events.
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.
processCpuResult Data returned by all installed modules during onProcessCpu.

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onUpdate: ({ frameStartResult, processGpuResult, processCpuResult }) => {
    if (!processCpuResult.reality) {
      return
    }
    const {rotation, position, intrinsics} = processCpuResult.reality
    const {cpuDataA, cpuDataB} = processCpuResult.mycamerapipelinemodule
    // ...
  },
})

onVideoSizeChange()

onVideoSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight, orientation })

Description

Called when the video feed changes size. Called with dimensions of video and canvas as well as device orientation.

Parameters

Parameters Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext.
videoWidth The width of the camera feed, in pixels.
videoHeight The height of the camera feed, in pixels.
canvasWidth The width of the GLctx canvas, in pixels.
canvasHeight The height of the GLctx canvas, in pixels.
orientation The rotation of the ui from portrait, in degrees (-90, 0, 90, 180).

Example

XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onVideoSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
    myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
  },
})

requiredPermissions()

requiredPermissions: ([permissions])

Description

requiredPermissions is used to define the list of permissions required by a pipeline module.

Parameters

Parameter Description
permissions An array of XR8.XrPermissions.permissions() required by the pipeline module.

Example

XR8.addCameraPipelineModule({
  name: 'request-gyro',
  requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})

XR8.CameraPixelArray

Description

Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.

Functions

Function Description
pipelineModule A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.

XR8.CameraPixelArray.pipelineModule()

XR8.CameraPixelArray.pipelineModule({ luminance, maxDimension, width, height })

Description

A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.

Parameters

Parameter Default Description
luminance [Optional] false If true, output grayscale instead of RGBA.
maxDimension [Optional] The size, in pixels, of the longest dimension of the output image. The shorter dimension will be scaled relative to the size of the camera input so that the image is resized without cropping or distortion.
width [Optional] Defaults to the width of the camera feed texture. The width of the output image, in pixels. Ignored if maxDimension is specified.
height [Optional] Defaults to the height of the camera feed texture. The height of the output image, in pixels. Ignored if maxDimension is specified.

Returns

Return value is an object made available to onProcessCpu and onUpdate as:

processGpuResult.camerapixelarray: {rows, cols, rowBytes, pixels}

Property Description
rows Height in pixels of the output image.
cols Width in pixels of the output image.
rowBytes Number of bytes per row of the output image.
pixels A Uint8Array of pixel data.
srcTex A texture containing the source image for the returned pixels.

Example

XR8.addCameraPipelineModule(XR8.CameraPixelArray.pipelineModule({ luminance: true }))
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessCpu: ({ processGpuResult }) => {
    const { camerapixelarray } = processGpuResult
    if (!camerapixelarray || !camerapixelarray.pixels) {
      return
    }
    const { rows, cols, rowBytes, pixels } = camerapixelarray

    ...
  },
})

XR8.CanvasScreenshot

Description

Provides a camera pipeline module that can generate screenshots of the current scene.

Functions

Function Description
configure Configures the expected result of canvas screenshots.
pipelineModule Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.
setForegroundCanvas Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.
takeScreenshot Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.

XR8.CanvasScreenshot.configure()

XR8.CanvasScreenshot.configure({ maxDimension, jpgCompression })

Description

Configures the expected result of canvas screenshots.

Parameters

Parameter Default Description
maxDimension [Optional] 1280 The value of the largest expected dimension.
jpgCompression [Optional] 75 A 1-100 value representing the JPEG compression quality. 100 is little to no loss; 1 is a very low quality image.

Example

XR8.CanvasScreenshot.configure({ maxDimension: 640, jpgCompression: 50 })

XR8.CanvasScreenshot.pipelineModule()

XR8.CanvasScreenshot.pipelineModule()

Description

Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.

Parameters

None

Returns

A CanvasScreenshot pipeline module that can be added via XR8.addCameraPipelineModule().

Example

XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())

XR8.CanvasScreenshot.setForegroundCanvas()

XR8.CanvasScreenshot.setForegroundCanvas(canvas)

Description

Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.

Only required if you use separate canvases for camera feed vs virtual objects.

Parameters

Parameter Description
canvas The canvas to use as a foreground in the screenshot

Example

const myOtherCanvas = document.getElementById('canvas2')
XR8.CanvasScreenshot.setForegroundCanvas(myOtherCanvas)

XR8.CanvasScreenshot.takeScreenshot()

XR8.CanvasScreenshot.takeScreenshot({ onProcessFrame })

Description

Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.

Parameters

Parameter Description
onProcessFrame [Optional] Callback where you can implement additional drawing to the screenshot 2d canvas.

Example

XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())
XR8.CanvasScreenshot.takeScreenshot().then(
  data => {
    // myImage is an <img> HTML element
    const image = document.getElementById('myImage')
    image.src = 'data:image/jpeg;base64,' + data
  },
  error => {
    console.log(error)
    // Handle screenshot error.
  })
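
The onProcessFrame callback can be used to draw on the screenshot before it is encoded, for example to add a watermark. A sketch, assuming the callback receives the screenshot's 2d canvas context as ctx (the exact callback signature is not shown here); the watermarkPosition helper is hypothetical, while the ctx calls are standard Canvas 2D API:

```javascript
// Hypothetical helper: bottom-right watermark position with a margin.
const watermarkPosition = (canvasWidth, canvasHeight, margin = 10) =>
  ({x: canvasWidth - margin, y: canvasHeight - margin})

// Usage sketch (requires the 8th Wall library and a running camera):
// XR8.CanvasScreenshot.takeScreenshot({
//   onProcessFrame: ({ctx}) => {
//     const {x, y} = watermarkPosition(ctx.canvas.width, ctx.canvas.height)
//     ctx.textAlign = 'right'
//     ctx.fillStyle = 'white'
//     ctx.fillText('my watermark', x, y)
//   },
// })
```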

XR8.FaceController

Description

FaceController provides face detection and meshing, and interfaces for configuring tracking.

Functions

Function Description
configure Configures what processing is performed by FaceController.
pipelineModule Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
AttachmentPoints Points on the face you can anchor content to.
MeshGeometry Options for defining which portions of the face have mesh triangles returned.

XR8.FaceController.configure()

XR8.FaceController.configure({ nearClip, farClip, meshGeometry, coordinates })

Description

Configures what processing is performed by FaceController.

Parameters

Parameter Description
nearClip [Optional] The distance from the camera of the near clip plane.
farClip [Optional] The distance from the camera of the far clip plane.
meshGeometry [Optional] List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.MOUTH]. The default is [XR8.FaceController.MeshGeometry.FACE].
coordinates [Optional] {origin, scale, axes, mirroredDisplay}

coordinates [Optional] is an object with the following properties:

Parameter Description
origin [Optional] {position: {x, y, z}, rotation: {w, x, y, z}} of the camera.
scale [Optional] Scale of the scene.
axes [Optional] 'LEFT_HANDED' or 'RIGHT_HANDED'. Default is 'RIGHT_HANDED'
mirroredDisplay [Optional] If true, flip left and right in the output.

IMPORTANT: FaceController and XrController cannot be used at the same time.

Example

  XR8.FaceController.configure({
    meshGeometry: [XR8.FaceController.MeshGeometry.FACE],
    coordinates: {
      mirroredDisplay: true,
      axes: 'RIGHT_HANDED',
    },
  })

XR8.FaceController.pipelineModule()

XR8.FaceController.pipelineModule()

Parameters

None

Description

Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.

Returns

Return value is an object made available to onUpdate as:

processCpuResult.reality: { rotation, position, intrinsics, cameraFeedTexture }

Property Description
rotation: {w, x, y, z} The orientation (quaternion) of the camera in the scene.
position: {x, y, z} The position of the camera in the scene.
intrinsics A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed.
cameraFeedTexture The WebGLTexture containing camera feed data.
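The pose above is typically applied to a scene camera each frame inside onUpdate. As an illustrative sketch (the poseToMatrix helper is not part of the XR8 API), the rotation quaternion and position can be composed into a column-major 4x4 matrix, matching the memory layout of the intrinsics matrix:

```javascript
// Illustrative helper, not part of the XR8 API: compose the camera pose into
// a column-major 4x4 matrix (same memory layout as intrinsics above).
const poseToMatrix = ({rotation, position}) => {
  const {w, x, y, z} = rotation
  return [
    1 - 2 * (y * y + z * z), 2 * (x * y + w * z),     2 * (x * z - w * y),     0,
    2 * (x * y - w * z),     1 - 2 * (x * x + z * z), 2 * (y * z + w * x),     0,
    2 * (x * z + w * y),     2 * (y * z - w * x),     1 - 2 * (x * x + y * y), 0,
    position.x,              position.y,              position.z,              1,
  ]
}

// Inside a camera pipeline module's onUpdate, the pose could be consumed as:
const onUpdate = ({processCpuResult}) => {
  if (!processCpuResult.reality) {
    return
  }
  const {rotation, position, intrinsics} = processCpuResult.reality
  const cameraMatrix = poseToMatrix({rotation, position})
  // Hand cameraMatrix and intrinsics to your rendering engine here.
}
```

In practice, engine integrations such as XR8.Threejs or XR8.PlayCanvas perform this step for you.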

Dispatched Events

faceloading: Fires when loading begins for additional face AR resources.

faceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}

Property Description
maxDetections The maximum number of faces that can be simultaneously processed.
pointsPerDetection Number of vertices that will be extracted per face.
indices: [{a, b, c}] Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure.
uvs: [{u, v}] uv positions into a texture map corresponding to the returned vertex points.

facescanning: Fires when all face AR resources have been loaded and scanning has begun.

facescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}

Property Description
maxDetections The maximum number of faces that can be simultaneously processed.
pointsPerDetection Number of vertices that will be extracted per face.
indices: [{a, b, c}] Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure.
uvs: [{u, v}] uv positions into a texture map corresponding to the returned vertex points.

facefound: Fires when a face is first found.

facefound.detail : {id, transform, vertices, normals, attachmentPoints}

Property Description
id A numerical id of the located face.
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} Transform information of the located face.
vertices: [{x, y, z}] Position of face points, relative to transform.
normals: [{x, y, z}] Normal direction of vertices, relative to transform.
attachmentPoints: { name, position: {x,y,z} } See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform.

transform is an object with the following properties:

Property Description
position {x, y, z} The 3d position of the located face.
rotation {w, x, y, z} The 3d local orientation of the located face.
scale A scale factor that should be applied to objects attached to this face.
scaledWidth Approximate width of the head in the scene when multiplied by scale.
scaledHeight Approximate height of the head in the scene when multiplied by scale.
scaledDepth Approximate depth of the head in the scene when multiplied by scale.

faceupdated: Fires when a face is subsequently found.

faceupdated.detail : {id, transform, vertices, normals, attachmentPoints}

Property Description
id A numerical id of the located face.
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} Transform information of the located face.
vertices: [{x, y, z}] Position of face points, relative to transform.
normals: [{x, y, z}] Normal direction of vertices, relative to transform.
attachmentPoints: { name, position: {x,y,z} } See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform.

transform is an object with the following properties:

Property Description
position {x, y, z} The 3d position of the located face.
rotation {w, x, y, z} The 3d local orientation of the located face.
scale A scale factor that should be applied to objects attached to this face.
scaledWidth Approximate width of the head in the scene when multiplied by scale.
scaledHeight Approximate height of the head in the scene when multiplied by scale.
scaledDepth Approximate depth of the head in the scene when multiplied by scale.

facelost: Fires when a face is no longer being tracked.

facelost.detail : { id }

Property Description
id A numerical id of the face that was lost.

Example - adding pipeline module

XR8.addCameraPipelineModule(XR8.FaceController.pipelineModule())
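The dispatched events above can be consumed by another camera pipeline module via its listeners property (see XR8.addCameraPipelineModule). A hedged sketch: the fully qualified event name 'facecontroller.facefound', the module name, the assumption that attachmentPoints is keyed by point name, and the midpoint helper are all illustrative, not guaranteed by this reference.

```javascript
// Illustrative helper: midpoint of two attachment-point positions.
const midpoint = (a, b) => ({
  x: (a.x + b.x) / 2,
  y: (a.y + b.y) / 2,
  z: (a.z + b.z) / 2,
})

// Assumed wiring: subscribe to dispatched events via a pipeline module's
// `listeners` property. The event name 'facecontroller.facefound' and the
// attachmentPoints keying are assumptions for illustration.
const installFaceListener = () => {
  XR8.addCameraPipelineModule({
    name: 'myfacelistener',
    listeners: [{
      event: 'facecontroller.facefound',
      process: ({detail}) => {
        const {attachmentPoints} = detail
        const eyeCenter = midpoint(
          attachmentPoints.leftEye.position, attachmentPoints.rightEye.position)
        console.log('Face found; point between the eyes:', eyeCenter)
      },
    }],
  })
}
```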

XR8.FaceController.AttachmentPoints

Enumeration

Description

Points of the face you can anchor content to.

Properties

Property Value Description
FOREHEAD forehead Forehead
RIGHT_EYEBROW_INNER rightEyebrowInner Inner side of right eyebrow
RIGHT_EYEBROW_MIDDLE rightEyebrowMiddle Middle of right eyebrow
RIGHT_EYEBROW_OUTER rightEyebrowOuter Outer side of right eyebrow
LEFT_EYEBROW_INNER leftEyebrowInner Inner side of left eyebrow
LEFT_EYEBROW_MIDDLE leftEyebrowMiddle Middle of left eyebrow
LEFT_EYEBROW_OUTER leftEyebrowOuter Outer side of left eyebrow
LEFT_EAR leftEar Left ear
RIGHT_EAR rightEar Right ear
LEFT_CHEEK leftCheek Left cheek
RIGHT_CHEEK rightCheek Right cheek
NOSE_BRIDGE noseBridge Bridge of the nose
NOSE_TIP noseTip Tip of the nose
LEFT_EYE leftEye Left eye
RIGHT_EYE rightEye Right eye
LEFT_EYE_OUTER_CORNER leftEyeOuterCorner Outer corner of left eye
RIGHT_EYE_OUTER_CORNER rightEyeOuterCorner Outer corner of right eye
UPPER_LIP upperLip Upper lip
LOWER_LIP lowerLip Lower lip
MOUTH mouth Mouth
MOUTH_RIGHT_CORNER mouthRightCorner Right corner of mouth
MOUTH_LEFT_CORNER mouthLeftCorner Left corner of mouth
CHIN chin Chin

XR8.FaceController.MeshGeometry

Enumeration

Description

Options for defining which portions of the face have mesh triangles returned.

Properties

Property Value Description
FACE face Return geometry for the face.
MOUTH mouth Return geometry for the mouth.
EYES eyes Return geometry for the eyes.

XR8.GlTextureRenderer

Description

Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations.

Functions

Function Description
configure Configures the pipeline module that draws the camera feed to the canvas.
create Creates an object for rendering from a texture to a canvas or another texture.
fillTextureViewport Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()
getGLctxParameters Gets the current set of WebGL bindings so that they can be restored later.
pipelineModule Creates a pipeline module that draws the camera feed to the canvas.
setGLctxParameters Restores the WebGL bindings that were saved with getGLctxParameters.
setTextureProvider Sets a provider that passes the texture to draw.

XR8.GlTextureRenderer.configure()

XR8.GlTextureRenderer.configure({ vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })

Description

Configures the pipeline module that draws the camera feed to the canvas.

Parameters

Parameter Description
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.
mirroredDisplay [Optional] If true, flip the rendering left-right.

Example

const purpleShader = 
  // Purple.
  ` precision mediump float;
    varying vec2 texUv;
    uniform sampler2D sampler;
    void main() {
      vec4 c = texture2D(sampler, texUv);
      float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
      vec3 p = vec3(.463, .067, .712);
      vec3 p1 = vec3(1.0, 1.0, 1.0) - p;
      vec3 rgb = y < .25 ? (y * 4.0) * p : ((y - .25) * 1.333) * p1 + p;
      gl_FragColor = vec4(rgb, c.a);
    }`

XR8.GlTextureRenderer.configure({fragmentSource: purpleShader})

XR8.GlTextureRenderer.create()

XR8.GlTextureRenderer.create({ GLctx, vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })

Description

Creates an object for rendering from a texture to a canvas or another texture.

Parameters

Parameter Description
GLctx The WebGlRenderingContext (or WebGl2RenderingContext) to use for rendering. If no toTexture is specified, content will be drawn to this context's canvas.
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.
mirroredDisplay [Optional] If true, flip the rendering left-right.

Returns

Returns an object: {render, destroy, shader}

Property Description
render({ renderTexture, viewport }) A function that renders the renderTexture to the specified viewport. Depending on if toTexture is supplied, the viewport is either on the canvas that created GLctx, or it's relative to the render texture provided.
destroy Clean up resources associated with this GlTextureRenderer.
shader Gets a handle to the shader being used to draw the texture.

The render function has the following parameters:

Parameter Description
renderTexture A WebGlTexture (source) to draw.
viewport The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport().

The viewport is specified by { width, height, offsetX, offsetY } :

Property Description
width The width (in pixels) to draw.
height The height (in pixels) to draw.
offsetX [Optional] The minimum x-coordinate (in pixels) to draw to.
offsetY [Optional] The minimum y-coordinate (in pixels) to draw to.
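A minimal usage sketch of create(), wrapped in a function because it can only run in the browser alongside the XR8 library; the GLctx, srcTexture, srcWidth and srcHeight arguments are assumed to come from the surrounding app:

```javascript
// Sketch: draw a source texture to the full canvas of an existing WebGL
// context, without distortion. Assumes XR8 is loaded in the page.
const drawTextureToCanvas = (GLctx, srcTexture, srcWidth, srcHeight) => {
  const renderer = XR8.GlTextureRenderer.create({GLctx})
  const viewport = XR8.GlTextureRenderer.fillTextureViewport(
    srcWidth, srcHeight, GLctx.canvas.width, GLctx.canvas.height)
  renderer.render({renderTexture: srcTexture, viewport})
  renderer.destroy()  // Clean up once the renderer is no longer needed.
}
```

For repeated per-frame drawing, keep the renderer alive and call destroy() only on teardown.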

XR8.GlTextureRenderer.fillTextureViewport()

XR8.GlTextureRenderer.fillTextureViewport(srcWidth, srcHeight, destWidth, destHeight)

Description

Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()

Parameters

Parameter Description
srcWidth The width of the texture you are rendering.
srcHeight The height of the texture you are rendering.
destWidth The width of the render target.
destHeight The height of the render target.

Returns

An object: { width, height, offsetX, offsetY }

Property Description
width The width (in pixels) to draw.
height The height (in pixels) to draw.
offsetX The minimum x-coordinate (in pixels) to draw to.
offsetY The minimum y-coordinate (in pixels) to draw to.
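The returned viewport encodes an aspect-fill ("cover") mapping: the source is scaled to cover the destination without distortion, with any overflow centered via negative offsets. The following is an illustrative re-implementation of that math, an assumption about the behavior rather than the actual source:

```javascript
// Illustrative sketch (not the actual implementation) of an aspect-fill
// viewport: scale the source to cover the destination, centering overflow.
const fillViewport = (srcWidth, srcHeight, destWidth, destHeight) => {
  const scale = Math.max(destWidth / srcWidth, destHeight / srcHeight)
  const width = Math.round(srcWidth * scale)
  const height = Math.round(srcHeight * scale)
  return {
    width,
    height,
    offsetX: Math.round((destWidth - width) / 2),
    offsetY: Math.round((destHeight - height) / 2),
  }
}

// A 640x480 camera texture covering a 360x640 portrait canvas: the height
// matches the canvas exactly, and the extra width is cropped symmetrically.
const viewport = fillViewport(640, 480, 360, 640)
```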

XR8.GlTextureRenderer.getGLctxParameters()

XR8.GlTextureRenderer.getGLctxParameters(GLctx, textureUnits)

Description

Gets the current set of WebGL bindings so that they can be restored later.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext to get bindings from.
textureUnits The texture units to preserve state for, e.g. [GLctx.TEXTURE0]

Returns

A struct to pass to setGLctxParameters.

Example

const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state

XR8.GlTextureRenderer.pipelineModule()

XR8.GlTextureRenderer.pipelineModule({ vertexSource, fragmentSource, toTexture, flipY })

Description

Creates a pipeline module that draws the camera feed to the canvas.

Parameters

Parameter Description
vertexSource [Optional] The vertex shader source to use for rendering.
fragmentSource [Optional] The fragment shader source to use for rendering.
toTexture [Optional] A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas.
flipY [Optional] If true, flip the rendering upside-down.

Returns

Return value is an object {viewport, shader} made available to onProcessCpu and onUpdate as:

processGpuResult.gltexturerenderer with the following properties:

Property Description
viewport The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport().
shader A handle to the shader being used to draw the texture.

processGpuResult.gltexturerenderer.viewport: { width, height, offsetX, offsetY }

Property Description
width The width (in pixels) to draw.
height The height (in pixels) to draw.
offsetX The minimum x-coordinate (in pixels) to draw to.
offsetY The minimum y-coordinate (in pixels) to draw to.

Example

XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onProcessCpu: ({ processGpuResult }) => {
    const {viewport, shader} = processGpuResult.gltexturerenderer
    if (!viewport) {
      return
    }
    const { width, height, offsetX, offsetY } = viewport

    // ...
  },
})

XR8.GlTextureRenderer.setGLctxParameters()

XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)

Description

Restores the WebGL bindings that were saved with getGLctxParameters.

Parameters

Parameter Description
GLctx The WebGLRenderingContext or WebGL2RenderingContext to restore bindings on.
restoreParams The output of getGLctxParameters.

Example

const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state

XR8.GlTextureRenderer.setTextureProvider()

XR8.GlTextureRenderer.setTextureProvider(({ frameStartResult, processGpuResult, processCpuResult }) => {} )

Description

Sets a provider that passes the texture to draw. This should be a function that takes the same inputs as cameraPipelineModule.onUpdate.

Parameters

setTextureProvider() takes a function with the following parameters:

Parameter Description
frameStartResult The data that was provided at the beginning of a frame.
processGpuResult Data returned by all installed modules during onProcessGpu.
processCpuResult Data returned by all installed modules during onProcessCpu.

Example

XR8.GlTextureRenderer.setTextureProvider(
  ({processGpuResult}) => {
    return processGpuResult.camerapixelarray ? processGpuResult.camerapixelarray.srcTex : null
  })

XR8.MediaRecorder

Description

Provides a camera pipeline module that allows you to record a video in MP4 format.

Functions

Function Description
configure Configure video recording settings.
pipelineModule Creates a pipeline module that records video in MP4 format.
recordVideo Start recording.
requestMicrophone Enables recording of audio (if not enabled automatically), requesting permissions if needed.
stopRecording Stop recording.
RequestMicOptions Enum for whether or not to automatically request microphone permissions.

XR8.MediaRecorder.configure()

XR8.MediaRecorder.configure({ coverImageUrl, enableEndCard, endCardCallToAction, footerImageUrl, foregroundCanvas, maxDurationMs, maxDimension, shortLink, configureAudioOutput, audioContext, requestMic })

Description

Configures various MediaRecorder parameters.

Parameters

Parameter Default Description
coverImageUrl [Optional] cover image configured in project, null otherwise Image source for cover image.
enableEndCard [Optional] false If true, enable end card.
endCardCallToAction [Optional] 'Try it at: ' Sets the text string for call to action.
footerImageUrl [Optional] null Image source for footer image.
foregroundCanvas [Optional] null The canvas to use as a foreground in the recorded video.
maxDurationMs [Optional] 15000 Maximum duration of video, in milliseconds.
maxDimension [Optional] 1280 Max dimension of the captured recording, in pixels.
shortLink [Optional] 8th.io shortlink from project dashboard Sets the text string for shortlink.
configureAudioOutput [Optional] null User-provided function that will receive the microphoneInput and audioProcessor audio nodes for complete control of the recording's audio. Nodes attached to the audio processor node will be part of the recording's audio. The function must return the end node of your audio graph.
audioContext [Optional] null User-provided AudioContext instance. Engines like THREE.js and BABYLON.js have their own internal audio instance. In order for recordings to contain sounds defined in those engines, you'll want to provide their AudioContext instance.
requestMic [Optional] 'auto' Determines when the audio permissions are requested. The options are provided in XR8.MediaRecorder.RequestMicOptions.

The function passed to configureAudioOutput takes an object with the following parameters:

Parameter Description
microphoneInput A GainNode that contains the user’s mic input. If the user’s permissions are not accepted, then this node won’t output the mic input but will still be present.
audioProcessor a ScriptProcessorNode that passes audio data to the recorder. If you want an audio node to be part of the recording’s audio output, then you must connect it to the audioProcessor.

Example

XR8.MediaRecorder.configure({
  maxDurationMs: 15000,
  enableEndCard: true,
  endCardCallToAction: 'Try it at:',
  shortLink: '8th.io/my-link',
})

Example - user configured audio output

const userConfiguredAudioOutput = ({microphoneInput, audioProcessor}) => {
  const myCustomAudioGraph = ...
  myCustomAudioSource.connect(myCustomAudioGraph)
  microphoneInput.connect(myCustomAudioGraph)

  // connect audio graph end node to hardware
  myCustomAudioGraph.connect(microphoneInput.context.destination)

  // audio graph will be automatically connected to processor
  return myCustomAudioGraph
}
const threejsAudioContext = THREE.AudioContext.getContext()
XR8.MediaRecorder.configure({
  configureAudioOutput: userConfiguredAudioOutput,
  audioContext: threejsAudioContext,
  requestMic: XR8.MediaRecorder.RequestMicOptions.AUTO,
})

XR8.MediaRecorder.pipelineModule()

XR8.MediaRecorder.pipelineModule()

Description

Provides a camera pipeline module that allows you to record a video in MP4 format.

Parameters

None

Returns

A MediaRecorder pipeline module that allows you to record a video.

Example

XR8.addCameraPipelineModule(XR8.MediaRecorder.pipelineModule())

XR8.MediaRecorder.recordVideo()

XR8.MediaRecorder.recordVideo({ onError, onProcessFrame, onStart, onStop, onVideoReady })

Description

Start recording.

This function takes an object that implements one or more of the following media recorder lifecycle callback methods:

Parameters

Parameter Description
onError Callback when there is an error.
onProcessFrame Callback for adding an overlay to the video.
onStart Callback when recording has started.
onStop Callback when recording has stopped.
onVideoReady Callback when recording has completed and video is ready.

Example

XR8.MediaRecorder.recordVideo({
  onVideoReady: (result) => window.dispatchEvent(new CustomEvent('recordercomplete', {detail: result})),
  onStop: () => showLoading(),
  onError: () => clearState(),
  onProcessFrame: ({elapsedTimeMs, maxRecordingMs, ctx}) => {
    // overlay some red text over the video
    ctx.fillStyle = 'red'
    ctx.font = '50px "Nunito"'
    ctx.fillText(`${elapsedTimeMs}/${maxRecordingMs}`, 50, 50)
    const timeLeft = (1 - elapsedTimeMs / maxRecordingMs)
    // update the progress bar to show how much time is left
    progressBar.style.strokeDashoffset = `${100 * timeLeft}`
  },
})

XR8.MediaRecorder.requestMicrophone()

XR8.MediaRecorder.requestMicrophone()

Description

Enables recording of audio (if not enabled automatically), requesting permissions if needed.

Returns a promise that lets the client know when the stream is ready. If you begin recording before the audio stream is ready, then you may miss the user's microphone output at the beginning of the recording.

Parameters

None

Example

XR8.MediaRecorder.requestMicrophone()
.then(() => {
  console.log('Microphone requested!')
})
.catch((err) => {
  console.log('Hit an error: ', err)
})

XR8.MediaRecorder.stopRecording()

XR8.MediaRecorder.stopRecording()

Description

Stop recording.

Parameters

None

Example

XR8.MediaRecorder.stopRecording()

XR8.MediaRecorder.RequestMicOptions

Enumeration

Description

Options for whether or not to automatically request microphone permissions.

Properties

Property Value Description
AUTO auto Automatically request microphone permissions in onAttach().
MANUAL manual Microphone permissions are NOT requested in onAttach(). Any other audio added to the app is still recorded if added to the AudioContext and connected to the audioProcessor provided to the user's configureAudioOutput function passed to XR8.MediaRecorder.configure(). You can request microphone permissions manually by calling XR8.MediaRecorder.requestMicrophone().
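With MANUAL, a common pattern is to defer the permission prompt to an explicit user gesture. A sketch, wrapped in a setup function because it only runs in the browser with XR8 loaded; the micButton element is hypothetical:

```javascript
// Sketch: defer the microphone permission prompt to a user tap.
// Assumes XR8 is loaded and a button with id 'micButton' exists (hypothetical).
const setupManualMic = () => {
  XR8.MediaRecorder.configure({
    requestMic: XR8.MediaRecorder.RequestMicOptions.MANUAL,
  })
  document.getElementById('micButton').addEventListener('click', () => {
    XR8.MediaRecorder.requestMicrophone()
      .then(() => console.log('Microphone ready'))
      .catch((err) => console.log('Microphone unavailable: ', err))
  })
}
```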

XR8.PlayCanvas

PlayCanvas (https://www.playcanvas.com/) is an open-source 3D game engine/interactive 3D application engine alongside a proprietary cloud-hosted creation platform that allows for simultaneous editing from multiple computers via a browser-based interface.

Description

Provides an integration that interfaces with the PlayCanvas environment and lifecycle to drive the PlayCanvas camera to render virtual overlays.

Functions

Function Description
runXr Opens the camera and starts running World Tracking and/or Image Tracking in a playcanvas scene.
runFaceEffects Opens the camera and starts running Face Effects in a playcanvas scene.

Getting Started with PlayCanvas

To get started go to https://playcanvas.com/the8thwall and fork one of our sample projects:

Add your App Key

Go to Settings -> External Scripts

The following two scripts should be added:

https://cdn.8thwall.com/web/xrextras/xrextras.js

https://apps.8thwall.com/xrweb?appKey=XXXXXX

(Note: replace the X's with your own unique App Key obtained from the 8th Wall Console.)

Enable "Transparent Canvas"

Go to Settings -> Rendering

Make sure that "Transparent Canvas" is checked

Disable "Prefer WebGL 2.0"

Go to Settings -> Rendering

Make sure that "Prefer WebGL 2.0" is unchecked

Add XRController

NOTE: Only for SLAM and/or Image Target projects. FaceController and XrController cannot be used simultaneously.

The 8th Wall sample PlayCanvas projects are populated with an XRController game object. If you are starting with a blank project, download xrcontroller.js from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.

Options:

Option Description
disableWorldTracking If true, turn off SLAM tracking for efficiency.
shadowmaterial Material which you want to use as a transparent shadow receiver (e.g. for ground shadows). Typically this material will be used on a "ground" plane entity positioned at (0,0,0).

Add FaceController

NOTE: Only for Face Effects projects. FaceController and XrController cannot be used simultaneously.

The 8th Wall sample PlayCanvas projects are populated with a FaceController game object. If you are starting with a blank project, download facecontroller.js from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.

Option Description
headAnchor The entity to anchor to the root of the head in world space.

XR8.PlayCanvas.runXr()

XR8.PlayCanvas.runXr( {pcCamera, pcApp}, [extraModules], config )

Description

Opens the camera and starts running XR World Tracking and/or Image Targets in a playcanvas scene.

Parameters

Parameter Description
pcCamera The playcanvas scene camera to drive with AR.
pcApp The playcanvas app, typically this.app.
extraModules [Optional] An optional array of extra pipeline modules to install.
config [Optional] Configuration parameters to pass to XR8.run()

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool false If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

Example

var xrcontroller = pc.createScript('xrcontroller')

// Optionally, world tracking can be disabled to increase efficiency when tracking image targets.
xrcontroller.attributes.add('disableWorldTracking', {type: 'boolean'})

xrcontroller.prototype.initialize = function() {
  const disableWorldTracking = this.disableWorldTracking

  // After XR has fully loaded, open the camera feed and start displaying AR.
  const runOnLoad = ({pcCamera, pcApp}, extramodules) => () => {
    XR8.xrController().configure({disableWorldTracking})
    XR8.PlayCanvas.runXr({pcCamera, pcApp}, extramodules)
  }

  // Find the camera in the playcanvas scene, and tie it to the motion of the user's phone in the
  // world.
  const pcCamera = XRExtras.PlayCanvas.findOneCamera(this.entity)

  // While XR is still loading, show some helpful things.
  // Almost There: Detects whether the user's environment can support web ar, and if it doesn't,
  //     shows hints for how to view the experience.
  // Loading: shows prompts for camera permission and hides the scene until it's ready for display.
  // Runtime Error: If something unexpected goes wrong, display an error screen.
  XRExtras.Loading.showLoading({onxrloaded: runOnLoad({pcCamera, pcApp: this.app}, [
    // Optional modules that developers may wish to customize or theme.
    XRExtras.AlmostThere.pipelineModule(),       // Detects unsupported browsers and gives hints.
    XRExtras.Loading.pipelineModule(),           // Manages the loading screen on startup.
    XRExtras.RuntimeError.pipelineModule(),      // Shows an error image on runtime error.
  ])})
}

XR8.PlayCanvas.runFaceEffects()

XR8.PlayCanvas.runFaceEffects( {pcCamera, pcApp}, [extraModules], config )

Description

Opens the camera and starts running Face Effects in a playcanvas scene.

Parameters

Parameter Description
pcCamera The playcanvas scene camera to drive with AR.
pcApp The playcanvas app, typically this.app.
extraModules [Optional] An optional array of extra pipeline modules to install.
config [Optional] Configuration parameters to pass to XR8.run()

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool false If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.
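This reference includes no sample for runFaceEffects, so the sketch below mirrors the runXr example, wrapped in a register function so it can be called once PlayCanvas, XR8 and XRExtras have loaded. The script name 'facecontroller' is illustrative:

```javascript
// Sketch: a PlayCanvas script that starts Face Effects, mirroring the runXr
// example. Assumes pc, XR8 and XRExtras are loaded in the browser.
const registerFaceController = () => {
  var facecontroller = pc.createScript('facecontroller')

  facecontroller.prototype.initialize = function() {
    // After XR has fully loaded, open the camera feed and start Face Effects.
    const runOnLoad = ({pcCamera, pcApp}, extraModules) => () => {
      XR8.PlayCanvas.runFaceEffects({pcCamera, pcApp}, extraModules)
    }

    // Find the camera in the playcanvas scene and drive it with face AR.
    const pcCamera = XRExtras.PlayCanvas.findOneCamera(this.entity)

    // Show loading/error UI while XR starts up, as in the runXr example.
    XRExtras.Loading.showLoading({onxrloaded: runOnLoad({pcCamera, pcApp: this.app}, [
      XRExtras.AlmostThere.pipelineModule(),
      XRExtras.Loading.pipelineModule(),
      XRExtras.RuntimeError.pipelineModule(),
    ])})
  }
}
```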

PlayCanvas Events

This section describes the events fired by 8th Wall in a PlayCanvas environment.

You can listen for these events in your web application.

Events Emitted

Event Emitted Description
xr:camerastatuschange This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
xr:realityerror This event is emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
xr:realityready This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.
xr:screenshoterror This event is emitted in response to xr:screenshotrequest resulting in an error.

XrController Events Emitted

Event Emitted Description
xr:screenshotready This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided.
xr:imageloading This event is emitted when detection image loading begins.
xr:imagescanning This event is emitted when all detection images have been loaded and scanning has begun.
xr:imagefound This event is emitted when an image target is first found.
xr:imageupdated This event is emitted when an image target changes position, rotation or scale.
xr:imagelost This event is emitted when an image target is no longer being tracked.

FaceController Events Emitted

Event Emitted Description
xr:faceloading Fires when loading begins for additional face AR resources.
xr:facescanning Fires when all face AR resources have been loaded and scanning has begun.
xr:facefound Fires when a face is first found.
xr:faceupdated Fires when a face is subsequently found.
xr:facelost Fires when a face is no longer being tracked.

xr:camerastatuschange

Description

This event is fired when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.

Example:

const handleCameraStatusChange = function handleCameraStatusChange(detail) {
  console.log('status change', detail.status);

  switch (detail.status) {
    case 'requesting':
      // Do something
      break;

    case 'hasStream':
      // Do something
      break;

    case 'failed':
      this.app.fire('xr:realityerror');
      break;
  }
}
this.app.on('xr:camerastatuschange', handleCameraStatusChange, this)

xr:realityerror

Description

This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.

Example:

this.app.on('xr:realityerror', ({error, isDeviceBrowserSupported, compatibility}) => {
  if (isDeviceBrowserSupported) {
    // Browser is compatible. Print the exception for more information.
    console.log(error)
    return
  }

  // Browser is not compatible. Check the reasons why it may not be in `compatibility`
  console.log(compatibility)
}, this)

xr:realityready

Description

This event is fired when 8th Wall Web has initialized and at least one frame has been successfully processed.

Example:

this.app.on('xr:realityready', () => {
  // Hide loading UI
}, this)

xr:screenshoterror

Description

This event is emitted in response to the xr:screenshotrequest resulting in an error.

Example:

this.app.on('xr:screenshoterror', (detail) => {
  console.log(detail)
  // Handle screenshot error.
}, this)

xr:screenshotready

Description

This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided.

Example:

this.app.on('xr:screenshotready', (event) => {
  // screenshotPreview is an <img> HTML element
  const image = document.getElementById('screenshotPreview')
  image.src = 'data:image/jpeg;base64,' + event.detail
}, this)

PlayCanvas Image Target Events

Image target events can be listened to as this.app.on(event, handler, this).

xr:imageloading: Fires when detection image loading begins.

xr:imageloading : { imageTargets: {name, type, metadata} }

xr:imagescanning: Fires when all detection images have been loaded and scanning has begun.

xr:imagescanning : { imageTargets: {name, type, metadata, geometry} }

xr:imagefound: Fires when an image target is first found.

xr:imagefound : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

xr:imageupdated: Fires when an image target changes position, rotation or scale.

xr:imageupdated : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

xr:imagelost: Fires when an image target is no longer being tracked.

xr:imagelost : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Example

// `name` is the image target's name and `entity` is the PlayCanvas entity to
// show or hide; both are assumed to be defined elsewhere in your script.
const showImage = (detail) => {
  if (name !== detail.name) { return }
  const {rotation, position, scale} = detail
  entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
  entity.setPosition(position.x, position.y, position.z)
  entity.setLocalScale(scale, scale, scale)
  entity.enabled = true
}

const hideImage = (detail) => {
  if (name !== detail.name) { return }
  entity.enabled = false
}

this.app.on('xr:imagefound', showImage, {})
this.app.on('xr:imageupdated', showImage, {})
this.app.on('xr:imagelost', hideImage, {})

PlayCanvas Face Effects Events

Face Effects events can be listened to as this.app.on(event, handler, this).

xr:faceloading: Fires when loading begins for additional face AR resources.

xr:faceloading : {maxDetections, pointsPerDetection, indices, uvs}

xr:facescanning: Fires when all face AR resources have been loaded and scanning has begun.

xr:facescanning : {maxDetections, pointsPerDetection, indices, uvs}

xr:facefound: Fires when a face is first found.

xr:facefound : {id, transform, attachmentPoints, vertices, normals}

xr:faceupdated: Fires when a face is subsequently found.

xr:faceupdated : {id, transform, attachmentPoints, vertices, normals}

xr:facelost: Fires when a face is no longer being tracked.

xr:facelost : {id}

Example

  let mesh = null
  
  // Fires when loading begins for additional face AR resources.
  this.app.on('xr:faceloading', ({maxDetections, pointsPerDetection, indices, uvs}) => {
    const node = new pc.GraphNode();
    const material = this.material.resource;
    mesh = pc.createMesh(
      this.app.graphicsDevice,
      new Array(pointsPerDetection * 3).fill(0.0),  // setting filler vertex positions
      {
        uvs: uvs.map((uv) => [uv.u, uv.v]).flat(),
        indices: indices.map((i) => [i.a, i.b, i.c]).flat()
      }
    );

    const meshInstance = new pc.MeshInstance(node, mesh, material);
    const model = new pc.Model();
    model.graph = node;
    model.meshInstances.push(meshInstance);
    this.entity.model.model = model;
  }, {})
  
  // Fires when a face is subsequently found.
  this.app.on('xr:faceupdated', ({id, transform, attachmentPoints, vertices, normals}) => {
    const {position, rotation, scale, scaledDepth, scaledHeight, scaledWidth} = transform
    
    this.entity.setPosition(position.x, position.y, position.z);
    this.entity.setLocalScale(scale, scale, scale)
    this.entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)

    // Set mesh vertices in local space
    mesh.setPositions(vertices.map((vertexPos) => [vertexPos.x, vertexPos.y, vertexPos.z]).flat())
    // Set vertex normals
    mesh.setNormals(normals.map((normal) => [normal.x, normal.y, normal.z]).flat())
    mesh.update()
  }, {})

PlayCanvas Event Listeners

This section describes the events that are listened for by 8th Wall Web in a PlayCanvas environment.

You can fire these events in your web application to perform various actions:

Event Listener Description
xr:hidecamerafeed Hides the camera feed. Tracking does not stop.
xr:recenter Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
xr:screenshotrequest Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit a xr:screenshotready event with the JPEG compressed image or xr:screenshoterror if an error has occurred.
xr:showcamerafeed Shows the camera feed.
xr:stopxr Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

xr:hidecamerafeed

this.app.fire('xr:hidecamerafeed')

Parameters

None

Description

Hides the camera feed. Tracking does not stop.

Example

this.app.fire('xr:hidecamerafeed')

xr:recenter

this.app.fire('xr:recenter')

Description

Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.

Parameters

Parameter Description
origin: {x, y, z} [Optional] The location of the new origin.
facing: {w, x, y, z} [Optional] A quaternion representing direction the camera should face at the origin.

Example

/*jshint esversion: 6, asi: true, laxbreak: true*/

// taprecenter.js: Defines a playcanvas script that re-centers the AR scene when the screen is
// tapped.

var taprecenter = pc.createScript('taprecenter')

// Fire a 'recenter' event to move the camera back to its starting location in the scene.
taprecenter.prototype.initialize = function() {
  this.app.touch.on(pc.EVENT_TOUCHSTART,
    (event) => { if (event.touches.length !== 1) { return } this.app.fire('xr:recenter')})
}
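The taprecenter script above recenters to the existing origin. The parameter table also lists optional origin and facing values; the sketch below assumes the PlayCanvas integration accepts them as a single object argument passed with the event, mirroring the Sumerian recenter event:

```javascript
// Assumption: xr:recenter accepts an optional {origin, facing} object,
// like the Sumerian 'recenter' event. Values here are illustrative.
this.app.fire('xr:recenter', {
  origin: {x: 1, y: 4, z: 0},
  facing: {w: 0.9856, x: 0, y: 0.169, z: 0},
})
```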

xr:screenshotrequest

this.app.fire('xr:screenshotrequest')

Parameters

None

Description

Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit a xr:screenshotready event with the JPEG compressed image or xr:screenshoterror if an error has occurred.

Example

this.app.on('xr:screenshotready', (event) => {
  // screenshotPreview is an <img> HTML element
  const image = document.getElementById('screenshotPreview')
  image.src = 'data:image/jpeg;base64,' + event.detail
}, this)

this.app.on('xr:screenshoterror', (detail) => {
  console.log(detail)
  // Handle screenshot error.
}, this)

this.app.fire('xr:screenshotrequest')

xr:showcamerafeed

this.app.fire('xr:showcamerafeed')

Parameters

None

Description

Shows the camera feed.

Example

this.app.fire('xr:showcamerafeed')

xr:stopxr

this.app.fire('xr:stopxr')

Parameters

None

Description

Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.

Example

this.app.fire('xr:stopxr')

XR8.Sumerian

Amazon Sumerian lets you create VR, AR, and 3D applications quickly and easily. For more information on Sumerian, please see https://aws.amazon.com/sumerian/

Adding 8th Wall Web to Sumerian

Please refer to the following URL for a getting started guide on using 8th Wall Web with Amazon Sumerian:

https://github.com/8thwall/web/tree/master/gettingstarted/xrsumerian

Functions

Function Description
addXRWebSystem Adds a custom Sumerian System using XrController to the provided Sumerian world.
addFaceEffectsWebSystem Adds a custom Sumerian System using FaceController to the provided Sumerian world.

XR8.Sumerian.addXRWebSystem()

XR8.Sumerian.addXRWebSystem()

Description

Adds a custom Sumerian System to the provided Sumerian world. If the given world is already running (i.e. in a {World#STATE_RUNNING} state), this system will start itself. Otherwise, it will wait for the world to start before running. When starting, this system will attach to the camera in the scene, modify its position, and render the camera feed to the background. The given Sumerian world must only contain one camera.

Parameters

Parameter Description
world The Sumerian world that corresponds to the loaded scene.
config [Optional] Configuration parameters to pass to XR8.run()

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool true If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

Example

window.XR8.Sumerian.addXRWebSystem(world)
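The optional config described above can be passed as a second argument. A minimal sketch using the documented parameter names (the front camera is chosen here purely for illustration):

```javascript
// Pass optional configuration to XR8.run() via the second argument.
window.XR8.Sumerian.addXRWebSystem(world, {
  webgl2: false,  // always use WebGL1
  cameraConfig: {direction: window.XR8.XrConfig.camera().FRONT},
  allowedDevices: window.XR8.XrConfig.device().MOBILE,
})
```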

XR8.Sumerian.addFaceEffectsWebSystem()

XR8.Sumerian.addFaceEffectsWebSystem()

Description

Adds a custom Sumerian System to the provided Sumerian world. If the given world is already running (i.e. in a {World#STATE_RUNNING} state), this system will start itself. Otherwise, it will wait for the world to start before running. When starting, this system will attach to the camera in the scene, modify its position, and render the camera feed to the background. The given Sumerian world must only contain one camera.

Parameters

Parameter Description
world The Sumerian world that corresponds to the loaded scene.
config [Optional] Configuration parameters to pass to XR8.run()

config [Optional] is an object with the following properties:

Property Type Default Description
webgl2 [Optional] bool false If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1.
ownRunLoop [Optional] bool true If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only]
cameraConfig: {direction} [Optional] object {direction: XR8.XrConfig.camera().BACK} Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT
glContextConfig [Optional] WebGLContextAttributes null The attributes to configure the WebGL canvas context.
allowedDevices [Optional] XR8.XrConfig.device() XR8.XrConfig.device().MOBILE Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE.

Example

window.XR8.Sumerian.addFaceEffectsWebSystem(world)

Sumerian Events

This section describes the events emitted when using 8th Wall Web with Amazon Sumerian

You can listen for these events in your web application and call a function to handle the event.

Events Emitted

Event Emitted Description
camerastatuschange This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible statuses.
screenshoterror This event is emitted in response to the screenshotrequest resulting in an error.
screenshotready This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image will be provided.
xrerror This event is emitted when an error has occurred when initializing 8th Wall Web.
xrready This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed.

XrController Events Emitted

Event Emitted Description
xrimageloading This event is emitted when detection image loading begins.
xrimagescanning This event is emitted when all detection images have been loaded and scanning has begun.
xrimagefound This event is emitted when an image target is first found.
xrimageupdated This event is emitted when an image target changes position, rotation or scale.
xrimagelost This event is emitted when an image target is no longer being tracked.

FaceController Events Emitted

Event Emitted Description
xrfaceloading Fires when loading begins for additional face AR resources.
xrfacescanning Fires when all face AR resources have been loaded and scanning has begun.
xrfacefound Fires when a face is first found.
xrfaceupdated Fires when a face is subsequently found.
xrfacelost Fires when a face is no longer being tracked.

camerastatuschange (Sumerian)

Description

This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible statuses.

Example:

var handleCameraStatusChange = function handleCameraStatusChange(data) {
  console.log('status change', data.status);

  switch (data.status) {
    case 'requesting':
      // Do something
      break;

    case 'hasStream':
      // Do something
      break;

    case 'failed':
      // Do something
      break;
  }
};
window.sumerian.SystemBus.addListener('camerastatuschange', handleCameraStatusChange)

screenshoterror (Sumerian)

Description

This event is emitted in response to the screenshotrequest resulting in an error.

Example:

window.sumerian.SystemBus.addListener('screenshoterror', (data) => {
  console.log(data)
  // Handle screenshot error.
})

screenshotready (Sumerian)

Description

This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the Sumerian canvas will be provided.

Example:

window.sumerian.SystemBus.addListener('screenshotready', (data) => {
  // screenshotPreview is an <img> HTML element
  const image = document.getElementById('screenshotPreview')
  image.src = 'data:image/jpeg;base64,' + data
})

xrerror

Description

This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.

Example:

window.sumerian.SystemBus.addListener('xrerror', (data) => {
  if (XR8.XrDevice.isDeviceBrowserCompatible()) {
    // Browser is compatible. Print the exception for more information.
    console.log(data.error)
    return
  }

  // Browser is not compatible. Check the reasons why it may not be.
  for (const reason of XR8.XrDevice.incompatibleReasons()) {
    // Handle each XR8.XrDevice.IncompatibleReason
  }
})

xrready

Description

This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.

Example:

window.sumerian.SystemBus.addListener('xrready', () => {
  // Hide loading UI
})

xrimageloading (Sumerian)

Description

This event is emitted when detection image loading begins.

xrimageloading : { imageTargets: {name, type, metadata} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.
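Like the face events below, this event can be handled via the Sumerian SystemBus. A minimal sketch (the handler body is illustrative):

```javascript
// Log each image target as loading begins.
window.sumerian.SystemBus.addListener('xrimageloading', ({imageTargets}) => {
  imageTargets.forEach(({name, type, metadata}) => {
    console.log('Loading image target:', name, type)
  })
})
```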

xrimagescanning (Sumerian)

Description

This event is emitted when all detection images have been loaded and scanning has begun.

xrimagescanning : { imageTargets: {name, type, metadata, geometry} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.
geometry Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight}; else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians}

If type = FLAT, geometry:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL, geometry:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

xrimagefound (Sumerian)

Description

This event is emitted when an image target is first found.

xrimagefound : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.
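A sketch of a SystemBus handler for this event, following the same pattern as the face events below (the positioning logic is left as a placeholder):

```javascript
window.sumerian.SystemBus.addListener('xrimagefound', (detail) => {
  const {name, type, position, rotation, scale} = detail
  // Position, orient and scale your content to match the detected image here.
  console.log('Found image target:', name, type)
})
```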

xrimageupdated (Sumerian)

Description

This event is emitted when an image target changes position, rotation or scale.

xrimageupdated : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

xrimagelost (Sumerian)

Description

This event is emitted when an image target is no longer being tracked.

xrimagelost : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3d position of the located image.
rotation: {w, x, y, z} The 3d local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.
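The imageupdated/imagelost pair is typically used to show content while a target is tracked and hide it when tracking is lost, as in the PlayCanvas example earlier. A sketch, where targetName, showEntity and hideEntity are placeholders for your own scene logic:

```javascript
const targetName = 'my-image-target'  // hypothetical target name

window.sumerian.SystemBus.addListener('xrimageupdated', (detail) => {
  if (detail.name === targetName) { showEntity(detail) }  // your show logic
})

window.sumerian.SystemBus.addListener('xrimagelost', (detail) => {
  if (detail.name === targetName) { hideEntity() }  // your hide logic
})
```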

xrfaceloading (Sumerian)

Description

Fires when loading begins for additional face AR resources.

xrfaceloading : {maxDetections, pointsPerDetection, indices, uvs}

Example

window.sumerian.SystemBus.addListener(
  'xrfaceloading',
  ({maxDetections, pointsPerDetection, indices, uvs}) => {
})

xrfacescanning (Sumerian)

Description

Fires when all face AR resources have been loaded and scanning has begun.

xrfacescanning : {maxDetections, pointsPerDetection, indices, uvs}

Example

window.sumerian.SystemBus.addListener(
  'xrfacescanning',
  ({maxDetections, pointsPerDetection, indices, uvs}) => {
})

xrfacefound (Sumerian)

Description

Fires when a face is first found.

xrfacefound : {id, transform, attachmentPoints, vertices, normals}

Example

window.sumerian.SystemBus.addListener(
  'xrfacefound',
  ({id, transform, attachmentPoints, vertices, normals}) => {
})

xrfaceupdated (Sumerian)

Description

Fires when a face is subsequently found.

xrfaceupdated : {id, transform, attachmentPoints, vertices, normals}

Example

window.sumerian.SystemBus.addListener(
  'xrfaceupdated',
  ({id, transform, attachmentPoints, vertices, normals}) => {
})

xrfacelost (Sumerian)

Description

Fires when a face is no longer being tracked.

xrfacelost : {id}

Example

window.sumerian.SystemBus.addListener(
  'xrfacelost',
  ({id}) => {
})

Sumerian Event Listeners

This section describes the events that are listened for by the Sumerian module in 8th Wall Web.

You can emit these events in your web application to perform various actions:

Event Listener Description
recenter Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
screenshotrequest Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred.

recenter (Sumerian)

window.sumerian.SystemBus.emit('recenter', {origin, facing})

Description

Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.

Parameters

Parameter Description
origin: {x, y, z} [Optional] The location of the new origin.
facing: {w, x, y, z} [Optional] A quaternion representing direction the camera should face at the origin.

Example

window.sumerian.SystemBus.emit('recenter')

// OR

window.sumerian.SystemBus.emit('recenter', {
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})

screenshotrequest (Sumerian)

window.sumerian.SystemBus.emit('screenshotrequest')

Parameters

None

Description

Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred.

Example

const photoButton = document.getElementById('photoButton')

// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  window.sumerian.SystemBus.emit('screenshotrequest')
})

window.sumerian.SystemBus.addListener('screenshotready', event => {
  image.src = 'data:image/jpeg;base64,' + event.detail
})

window.sumerian.SystemBus.addListener('screenshoterror', event => {
  console.log("error")
})

XR8.Threejs

Description

Provides a camera pipeline module that drives the three.js camera to do virtual overlays.

Functions

Function Description
pipelineModule A pipeline module that interfaces with the threejs environment and lifecycle.
xrScene Get a handle to the xr scene, camera and renderer.

XR8.Threejs.pipelineModule()

XR8.Threejs.pipelineModule()

Description

A pipeline module that interfaces with the threejs environment and lifecycle. The threejs scene can be queried using Threejs.xrScene() after Threejs.pipelineModule()'s onStart method is called. Setup can be done in another pipeline module's onStart method by referring to Threejs.xrScene() as long as XR8.addCameraPipelineModule is called on the second module after calling XR8.addCameraPipelineModule(Threejs.pipelineModule()).

  • onStart, a threejs renderer and scene are created and configured to draw over a camera feed.
  • onUpdate, the threejs camera is driven with the phone's motion.
  • onRender, the renderer's render() method is invoked.

Note that this module does not actually draw the camera feed to the canvas, GlTextureRenderer does that. To add a camera feed in the background, install the GlTextureRenderer.pipelineModule() before installing this module (so that it is rendered before the scene is drawn).

Parameters

None

Returns

A Threejs pipeline module that can be added via XR8.addCameraPipelineModule().

Example

// Add XrController.pipelineModule(), which enables 6DoF camera motion estimation.
XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())

// Add a GlTextureRenderer which draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())

// Add Threejs.pipelineModule() which creates a threejs scene, camera, and renderer, and
// drives the scene camera based on 6DoF camera motion.
XR8.addCameraPipelineModule(XR8.Threejs.pipelineModule())

// Add custom logic to the camera loop. This is done with camera pipeline modules that provide
// logic for key lifecycle moments for processing each camera frame. In this case, we'll be
// adding onStart logic for scene initialization, and onUpdate logic for scene updates.
XR8.addCameraPipelineModule({
  // Camera pipeline modules need a name. It can be whatever you want but must be unique
  // within your app.
  name: 'myawesomeapp',

  // onStart is called once when the camera feed begins. In this case, we need to wait for the
  // XR8.Threejs scene to be ready before we can access it to add content.
  onStart: ({canvasWidth, canvasHeight}) => {
    // Get the 3js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
    // reason we can access it here now is because 'myawesomeapp' was installed after
    // XR8.Threejs.pipelineModule().
    const {scene, camera} = XR8.Threejs.xrScene()

    // Add some objects to the scene and set the starting camera position.
    myInitXrScene({scene, camera})

    // Sync the xr controller's 6DoF position and camera parameters with our scene.
    XR8.XrController.updateCameraProjectionMatrix({
      origin: camera.position,
      facing: camera.quaternion,
    })
  },

  // onUpdate is called once per camera loop prior to render. Any updates to the 3js scene
  // geometry would typically happen here.
  onUpdate: () => {
    // Update the position of objects in the scene, etc.
    updateScene(XR8.Threejs.xrScene())
  },
})

XR8.Threejs.xrScene()

XR8.Threejs.xrScene()

Description

Get a handle to the xr scene, camera and renderer.

Parameters

None

Returns

An object: { scene, camera, renderer }

Property Description
scene The Threejs scene.
camera The Threejs main camera.
renderer The Threejs renderer.

Example

const {scene, camera, renderer} = XR8.Threejs.xrScene()

XR8.XrConfig

Description

Utilities for specifying class of devices and cameras that pipeline modules should run on.

Properties

Property Type Description
camera() Enum Desired camera to use.
device() Enum Specify the class of devices that the pipeline should run on.

XR8.XrConfig.camera()

Enumeration

Description

Desired camera to use.

Properties

Property Value Description
FRONT front Use the front facing / selfie camera.
BACK back Use the rear facing / back camera.
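The camera direction is selected via the cameraConfig property passed to XR8.run(), as described in the config tables above. For example:

```javascript
// Open the front (selfie) camera when starting the camera run loop.
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({
  canvas: document.getElementById('camerafeed'),
  cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
})
```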

XR8.XrConfig.device()

Enumeration

Description

Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera.

Note: World tracking can only be used with XR8.XrConfig.device().MOBILE.

Properties

Property Value Description
MOBILE mobile Restrict the camera pipeline to mobile-class devices, for example phones and tablets.
ANY any Start running camera pipeline without checking device capabilities. This may fail at some point in the pipeline startup if a required sensor is not available at run time (for example, a laptop has no camera).
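The device class is selected via the allowedDevices property passed to XR8.run(), as described in the config tables above. For example:

```javascript
// Allow the pipeline to run on any device class, e.g. desktop browsers.
// Note: world tracking still requires XR8.XrConfig.device().MOBILE.
XR8.run({
  canvas: document.getElementById('camerafeed'),
  allowedDevices: XR8.XrConfig.device().ANY,
})
```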

XR8.XrController

Description

XrController provides 6DoF camera tracking and interfaces for configuring tracking.

Functions

Function Description
configure Configures what processing is performed by XrController (may have performance implications).
hitTest Estimate the 3D position of a point on the camera feed.
pipelineModule Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
recenter Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking.
updateCameraProjectionMatrix Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session.

XR8.XrController.configure()

XrController.configure({ enableWorldPoints, enableLighting, disableWorldTracking, imageTargets: [] })

Description

Configures the processing performed by XrController (may have performance implications).

Parameters

Parameter Description
enableLighting [Optional] If true, lighting will be provided by XrController.pipelineModule() as processCpuResult.reality.lighting
enableWorldPoints [Optional] If true, worldPoints will be provided by XrController.pipelineModule() as processCpuResult.reality.worldPoints.
disableWorldTracking [Optional] If true, turn off SLAM tracking for efficiency. This needs to be done BEFORE XR8.run() is called.
imageTargets [Optional] List of names of the image target to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list.
leftHandedAxes [Optional] If true, use left-handed coordinates. Default is false
mirroredDisplay [Optional] If true, flip left and right in the output.

IMPORTANT: disableWorldTracking: true needs to be set BEFORE both XR8.XrController.pipelineModule() and XR8.run() are called.

Example

XR8.XrController.configure({ enableLighting: true, enableWorldPoints: true, disableWorldTracking: false })

Example - Disable world tracking

// Disable world tracking (SLAM)
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed')})

Example - Change active image target set

XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})

XR8.XrController.hitTest()

XrController.hitTest(X, Y, includedTypes = [])

Description

Estimates the 3D position of a point on the camera feed. X and Y are specified as numbers between 0 and 1, where (0, 0) is the upper left corner and (1, 1) is the lower right corner of the camera feed as rendered in the camera that was specified by updateCameraProjectionMatrix. Multiple 3D position estimates may be returned for a single hit test based on the source of data being used to estimate the position. The data source that was used to estimate the position is indicated by the hitTest.type.

Parameters

Parameter Description
X Value between 0 and 1 that represents the horizontal position on the camera feed from left to right.
Y Value between 0 and 1 that represents the vertical position on the camera feed from top to bottom.
includedTypes List of one or more of: 'FEATURE_POINT', 'ESTIMATED_SURFACE' or 'DETECTED_SURFACE'. Note: Currently only 'FEATURE_POINT' is supported.

Returns

An array of estimated 3D positions from the hit test:

[{ type, position, rotation, distance }]

Property Description
type One of 'FEATURE_POINT', 'ESTIMATED_SURFACE', 'DETECTED_SURFACE', or 'UNSPECIFIED'
position: {x, y, z} The estimated 3D position of the queried point on the camera feed.
rotation: {x, y, z, w} The estimated 3D rotation of the queried point on the camera feed.
distance The estimated distance from the device of the queried point on the camera feed.

Example

const hitTestHandler = (e) => {
  const x = e.touches[0].clientX / window.innerWidth
  const y = e.touches[0].clientY / window.innerHeight
  const hitTestResults = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
}
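hitTest() may return several estimates for a single query. A small helper (hypothetical, not part of the XR8 API) can pick the closest one before placing content:

```javascript
// Hypothetical helper (not part of the XR8 API): pick the estimate
// closest to the device from an array of hitTest() results.
const closestHit = (hits) =>
  hits.reduce((best, hit) => (best === null || hit.distance < best.distance ? hit : best), null)

// Sketch of usage inside a touch handler:
// const hits = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
// const hit = closestHit(hits)
// if (hit) { placeModelAt(hit.position) }  // placeModelAt is app-specific.
```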

XR8.XrController.pipelineModule()

XR8.XrController.pipelineModule()

Parameters

None

Description

Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.

Returns

Return value is an object made available to onUpdate as:

processCpuResult.reality: { rotation, position, intrinsics, trackingStatus, trackingReason, worldPoints, realityTexture, lighting }

Property Description
rotation: {w, x, y, z} The orientation (quaternion) of the camera in the scene.
position: {x, y, z} The position of the camera in the scene.
intrinsics A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed.
trackingStatus One of 'UNSPECIFIED', 'NOT_AVAILABLE', 'LIMITED' or 'NORMAL'.
trackingReason One of 'UNSPECIFIED', 'INITIALIZING', 'RELOCALIZING', 'TOO_MUCH_MOTION' or 'NOT_ENOUGH_TEXTURE'.
worldPoints: [{id, confidence, position: {x, y, z}}] An array of detected points in the world at their location in the scene. Only filled if XrController is configured to return world points and trackingReason != INITIALIZING.
realityTexture The WebGLTexture containing camera feed data.
lighting: {exposure, temperature} Exposure of the lighting in your environment. Note: temperature has not yet been implemented.
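A renderer will typically only trust the pose when tracking is usable. The guard below is a hypothetical sketch, not part of the XR8 API; only the trackingStatus values come from the table above:

```javascript
// Hypothetical guard: decide whether the reality data from this frame
// is good enough to drive the scene camera.
const shouldUpdateCamera = (reality) =>
  !!reality && (reality.trackingStatus === 'NORMAL' || reality.trackingStatus === 'LIMITED')

// Sketch of usage inside a custom pipeline module's onUpdate:
// onUpdate: ({processCpuResult}) => {
//   if (shouldUpdateCamera(processCpuResult.reality)) {
//     const {position, rotation} = processCpuResult.reality
//     // Apply position/rotation to the scene camera here.
//   }
// }
```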

Dispatched Events

imageloading: Fires when detection image loading begins.

imageloading.detail : { imageTargets: {name, type, metadata} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.

imagescanning: Fires when all detection images have been loaded and scanning has begun.

imagescanning.detail : { imageTargets: {name, type, metadata, geometry} }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
metadata User metadata.
geometry Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight}, else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians}

If type = FLAT, geometry:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL, geometry:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.
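The radius and arc fields can be combined to recover the curved surface's dimensions via the arc length formula s = r * θ. The helper below is a hypothetical sketch, not part of the XR8 API:

```javascript
// Hypothetical helper: surface width (arc length s = r * theta) at the
// top and bottom edges of a CYLINDRICAL or CONICAL target.
const curvedSurfaceWidths = ({radiusTop, radiusBottom, arcLengthRadians}) => ({
  top: radiusTop * arcLengthRadians,
  bottom: radiusBottom * arcLengthRadians,
})
```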

imagefound: Fires when an image target is first found.

imagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3D position of the located image.
rotation: {w, x, y, z} The 3D local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.
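Because flat and curved targets carry different geometry fields, an event handler may want to branch on type. A hypothetical extractor (not part of the XR8 API) might look like:

```javascript
// Hypothetical helper: pull out only the geometry fields relevant to
// the target's type from an imagefound/imageupdated detail object.
const targetGeometry = (detail) =>
  detail.type === 'FLAT'
    ? {scaledWidth: detail.scaledWidth, scaledHeight: detail.scaledHeight}
    : {
        height: detail.height,
        radiusTop: detail.radiusTop,
        radiusBottom: detail.radiusBottom,
        arcStartRadians: detail.arcStartRadians,
        arcLengthRadians: detail.arcLengthRadians,
      }
```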

imageupdated: Fires when an image target changes position, rotation or scale.

imageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3D position of the located image.
rotation: {w, x, y, z} The 3D local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

imagelost: Fires when an image target is no longer being tracked.

imagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }

Property Description
name The image's name.
type One of 'FLAT', 'CYLINDRICAL', 'CONICAL'.
position: {x, y, z} The 3D position of the located image.
rotation: {w, x, y, z} The 3D local orientation of the located image.
scale A scale factor that should be applied to objects attached to this image.

If type = FLAT:

Property Description
scaledWidth The width of the image in the scene, when multiplied by scale.
scaledHeight The height of the image in the scene, when multiplied by scale.

If type = CYLINDRICAL or CONICAL:

Property Description
height Height of the curved target.
radiusTop Radius of the curved target at the top.
radiusBottom Radius of the curved target at the bottom.
arcStartRadians Starting angle in radians.
arcLengthRadians Central angle in radians.

Example - adding pipeline module

XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())

Example - dispatched events

const logEvent = ({name, detail}) => {
  console.log(`Handling event ${name}, got detail, ${JSON.stringify(detail)}`)
}

XR8.addCameraPipelineModule({
  name: 'eventlogger',
  listeners: [
    {event: 'reality.imageloading', process: logEvent },
    {event: 'reality.imagescanning', process: logEvent },
    {event: 'reality.imagefound', process: logEvent},
    {event: 'reality.imageupdated', process: logEvent},
    {event: 'reality.imagelost', process: logEvent},
  ],
})

XR8.XrController.recenter()

XR8.XrController.recenter()

Parameters

None

Description

Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking.

XR8.XrController.updateCameraProjectionMatrix()

XR8.XrController.updateCameraProjectionMatrix({ cam, origin, facing })

Description

Resets the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session.

Parameters

Parameter Description
cam [Optional] { pixelRectWidth, pixelRectHeight, nearClipPlane, farClipPlane }
origin: { x, y, z } [Optional] The starting position of the camera in the scene.
facing: { w, x, y, z } [Optional] The starting direction (quaternion) of the camera in the scene.

cam has the following parameters:

Parameter Description
pixelRectWidth The width of the canvas that displays the camera feed.
pixelRectHeight The height of the canvas that displays the camera feed.
nearClipPlane The closest distance to the camera at which scene objects are visible.
farClipPlane The farthest distance to the camera at which scene objects are visible.

Example

XR8.XrController.updateCameraProjectionMatrix({
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})

XR8.XrDevice

Description

Provides information about device compatibility and characteristics.

Properties

Property Type Description
IncompatibilityReasons Enum The possible reasons for why a device and browser may not be compatible with 8th Wall Web.

Functions

Function Description
deviceEstimate Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable.
incompatibleReasons Returns an array of XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false.
incompatibleReasonDetails Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false.
isDeviceBrowserCompatible Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported.

XR8.XrDevice.IncompatibilityReasons

Enumeration

Description

The possible reasons for why a device and browser may not be compatible with 8th Wall Web.

Properties

Property Value Description
UNSPECIFIED 0 The incompatible reason is not specified.
UNSUPPORTED_OS 1 The estimated operating system is not supported.
UNSUPPORTED_BROWSER 2 The estimated browser is not supported.
MISSING_DEVICE_ORIENTATION 3 The browser does not support device orientation events.
MISSING_USER_MEDIA 4 The browser does not support user media access.
MISSING_WEB_ASSEMBLY 5 The browser does not support web assembly.

XR8.XrDevice.deviceEstimate()

XR8.XrDevice.deviceEstimate()

Description

Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable.

Parameters

None

Returns

An object: { locale, os, osVersion, manufacturer, model }

Property Description
locale The user's locale.
os The device's operating system.
osVersion The device's operating system version.
manufacturer The device's manufacturer.
model The device's model.
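A sketch of formatting the estimate for a diagnostics overlay; the formatter itself is hypothetical, only XR8.XrDevice.deviceEstimate() comes from the API:

```javascript
// Hypothetical formatter for the object returned by deviceEstimate().
const formatDeviceEstimate = ({manufacturer, model, os, osVersion}) =>
  `${manufacturer} ${model} (${os} ${osVersion})`

// In a real page (remember this is only an estimate):
// console.log(formatDeviceEstimate(XR8.XrDevice.deviceEstimate()))
```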

XR8.XrDevice.incompatibleReasons()

XR8.XrDevice.incompatibleReasons({ allowedDevices })

Description

Returns an array of XR8.XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false.

Parameters

Parameter Description
allowedDevices [Optional] Supported device classes, a value in XR8.XrConfig.device().

Returns

Returns an array of XrDevice.IncompatibilityReasons.

Example

const reasons = XR8.XrDevice.incompatibleReasons()
for (let reason of reasons) {
  switch (reason) {
    case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_OS:
      // Handle unsupported OS error messaging.
      break
    case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_BROWSER:
      // Handle unsupported browser error messaging.
      break
    // ... handle other reasons.
  }
}

XR8.XrDevice.incompatibleReasonDetails()

XR8.XrDevice.incompatibleReasonDetails({ allowedDevices })

Description

Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XrDevice.isDeviceBrowserCompatible() returns false.

Parameters

Parameter Description
allowedDevices [Optional] Supported device classes, a value in XR8.XrConfig.device().

Returns

An object: { inAppBrowser, inAppBrowserType }

Property Description
inAppBrowser The name of the in-app browser detected (e.g. 'Twitter')
inAppBrowserType A string that helps describe how to handle the in-app browser.
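These details are hints only. A hedged sketch of turning them into user messaging (the helper and its wording are hypothetical, not part of the XR8 API):

```javascript
// Hypothetical helper: build a user-facing message when an in-app
// browser is reported; returns null when no in-app browser is detected.
const inAppBrowserMessage = ({inAppBrowser}) =>
  inAppBrowser
    ? `This experience works best outside the ${inAppBrowser} in-app browser. Please open it in your system browser.`
    : null
```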

XR8.XrDevice.isDeviceBrowserCompatible()

XR8.XrDevice.isDeviceBrowserCompatible({ allowedDevices })

Description

Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported.

Parameters

Parameter Description
allowedDevices [Optional] Supported device classes, a value in XR8.XrConfig.device().

Returns

True or false.

Example

XR8.XrDevice.isDeviceBrowserCompatible({allowedDevices: XR8.XrConfig.device().MOBILE})

XR8.XrPermissions

Description

Utilities for specifying permissions required by a pipeline module.

Modules can indicate which browser capabilities they require that may need permission requests. The framework can use these to request the appropriate permissions if they are absent, or to create components that request the appropriate permissions before running XR.

Properties

Property Type Description
permissions() Enum List of permissions that can be specified as required by a pipeline module.

Example

XR8.addCameraPipelineModule({
  name: 'request-gyro',
  requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})

XR8.XrPermissions.permissions()

Enumeration

Description

Permissions that can be required by a pipeline module.

Properties

Property Value Description
CAMERA camera Require camera.
DEVICE_MOTION devicemotion Require accelerometer.
DEVICE_ORIENTATION deviceorientation Require gyro.
MICROPHONE microphone Require microphone.

Example

XR8.addCameraPipelineModule({
  name: 'request-gyro',
  requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})