8th Wall enables developers to create, collaborate and publish Web AR experiences that run directly in a mobile web browser.
Built entirely using standards-compliant JavaScript and WebGL, 8th Wall Web is a complete implementation of 8th Wall's Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time AR on mobile browsers. Features include World Tracking, Image Targets, and Face Effects.
The 8th Wall Cloud Editor allows you to develop fully featured Web AR projects and collaborate with team members in real time. Built-In Hosting allows you to publish projects to multiple deployment states hosted on 8th Wall's reliable and secure global network, including a password-protected staging environment. Self-Hosting is also available.
8th Wall Web is easily integrated into 3D JavaScript frameworks such as:
8th Wall Web Release 15.2 is now available! This release provides a number of updates and enhancements:
Release 15.2: (2020-December-14, v15.2.4.487)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Click here to see a full list of changes.
Mobile browsers require the following functionality to support 8th Wall Web experiences:
NOTE: 8th Wall Web experiences must be viewed via https. This is required by browsers for camera access.
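If your page can also be reached over plain http, a small guard like the sketch below (illustrative only, not part of the 8th Wall API) can forward users to https:

```javascript
// Illustrative sketch: forward http visitors to https so the browser
// will allow camera access. Not part of the 8th Wall API.
if (window.location.protocol === 'http:' && window.location.hostname !== 'localhost') {
  window.location.replace(window.location.href.replace('http:', 'https:'))
}
```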
This translates to the following compatibility for iOS and Android devices:
iOS:
Apps that use SFSafariViewController web views (iOS 13+)
Apps/Browsers that use WKWebView web views (iOS 14.3+)
Examples:
Android:
Browsers known to natively support the features required for WebAR:
Apps using Web Views known to support the features required for WebAR:
Link-out support
For apps that don’t natively support the features required for WebAR, our XRExtras library provides flows to direct users to the right place, maximizing accessibility of your WebAR projects from these apps.
Examples: WeChat, TikTok
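If you are using XRExtras, the "almost there" flow can be pointed at a link of your choosing. A minimal sketch, assuming the XRExtras script is already loaded (the URL below is a placeholder):

```javascript
// Minimal sketch: direct users in unsupported in-app browsers to a link
// that opens in a compatible browser. The URL is a placeholder.
XRExtras.AlmostThere.configure({
  url: 'https://8th.io/my-project',
})
```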
Screenshots:
Launch Browser from Menu (iOS), Launch Browser from Button (Android), and Copy Link to Clipboard.
8th Wall Web is easily integrated into 3D JavaScript frameworks such as:
Platform | Lighting | AR Background | Camera Motion | Horizontal Surfaces | Vertical Surfaces | Image Detection & Tracking | World Points | Hit Tests | Face Effects |
---|---|---|---|---|---|---|---|---|---|
8th Wall Web | Yes | Yes | 6 DoF (Scale Free) | Yes, Instant Planar | No | Yes | Yes | Yes | Yes |
This guide provides all of the steps required to get you up and running with the 8th Wall Cloud Editor and Built-in Hosting platform.
Creating an 8th Wall Account gives you the ability to:
New Users: Sign up for a 14-day free trial at https://www.8thwall.com/try-free-trial
Existing Users: Login at https://www.8thwall.com/login using your email address and password.
The 8th Wall Cloud Editor and Built-in Hosting platform are available to workspaces with a paid subscription. 8th Wall offers a 14-day free trial so you can get access to the full power of 8th Wall and begin building WebAR experiences.
At the end of your 14-day free trial, your account will automatically upgrade to a paid plan. You must cancel your free trial before the end of the trial period to avoid charges. There are no refunds or credits for partial or unused months. To manage your subscription settings, please see https://www.8thwall.com/docs/web/#account-settings
Enter payment details and select plan. NOTE: You will NOT be charged anything at this time. You can cancel at any time during the 14-day free trial period to avoid charges.
Review and confirm. Click Start free trial to continue and activate your 14-day free trial.
On the following screen, enter a descriptive workspace name. Most people use their company name.
Select a workspace URL. Pick something relevant to your workspace name.
IMPORTANT: If you use 8th Wall hosting, this value will be used by default as the sub-domain in your URL (e.g. mycompany.8thwall.app/project-name). You cannot change this later! You do, however, have the ability to connect custom domains later.
Enter Basic info for the project: Please provide a Title, URL, Description (optional) and Cover Image (optional). All of these fields, except URL, can be edited later in the Project Settings page.
Select a Project Type:
Commercial: Commercial projects are intended for commercial use. You can develop unlimited commercial projects with your plan at no additional charge. When you’re ready to launch a commercial project so that the world can see it, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Non-Commercial: Your paid subscription allows you to develop and publish unlimited non-commercial projects. A "Non-Commercial" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Educational Use: Educational projects are intended for educational purposes only, such as a classroom setting. You can develop and publish unlimited educational projects. An "Educational Use" label will appear on the loading screen. If your project is intended for commercial use, you must select "Commercial". If you are an educational institution, please contact 8th Wall for information on a custom Education plan.
At the top of the Cloud Editor window, click the Preview button.
Scan the QR code with your mobile device to open a web browser and look at a live preview of the WebAR project.
When the page loads, you'll be prompted for access to motion and orientation sensors (on some devices) and the camera (all devices). Click Allow for all permission prompts. You will be taken to the private development URL for this project.
When the WebAR preview loads, tap on the ground to spawn trees.
Result:
At this point, you have a fully operational WebAR project and have previewed it on your own device. Next, publish your demo project using 8th Wall's Built-in Hosting so that it can be viewed publicly by anyone on the internet.
Note: Commercial projects require additional commercial licenses. See https://www.8thwall.com/pricing for more info.
At the top right of the Cloud Editor window, click Publish
You will see a list of commits (in this case there is only one - the initial clone) as well as the Development, Staging and Public URLs for the project. Promote both Staging and Public to the first commit in the list by selecting both radio buttons.
Click Publish
Go back to the Project Dashboard using the left navigation. In the 8 Code section, the Public project URL will be displayed along with both an 8th.io shortlink and associated QR code.
Scan the QR code with your mobile device to view the Public WebAR experience.
8th Wall has created a number of sample projects that you can clone and use as starting points to help you get started. Please check out:
Cloud Editor & 8th Wall Hosted examples:
Self-Hosted examples:
8th Wall is a complete Web AR solution that allows you to create, collaborate and publish Web AR experiences that run directly in a mobile web browser.
Create an 8th Wall Account to:
New Users: Sign up for a 14-day free trial at https://www.8thwall.com/try-free-trial
Existing Users: Login at https://www.8thwall.com/login using your email address and password.
The 8th Wall homepage, when logged in, provides access to all of your workspaces and recent projects. Select a Workspace or Project to access its dashboard.
Homepage guide:
A Workspace is a logical grouping of Projects, Users, and Billing. Workspaces can contain one or more Users, each with different permissions. Users can belong to multiple Workspaces.
The Workspace dashboard allows you to:
When creating a new 8th Wall account directly from 8thwall.com, you will start with a workspace with a 14-day free trial.
If signing up via an invitation from another 8th Wall user, you will be added as a team member of their existing workspace.
To select a workspace, perform one of the following:
Each Workspace has a team containing one or more Users, each with different permissions. Users can belong to multiple Workspace teams.
Add others to your team to allow them to access the Projects in your workspace. This allows you to collaboratively create, manage, test and publish Web AR projects as a team.
Team members can have one of three roles:
Capabilities for each role:
Capability | OWNER | ADMIN | DEV |
---|---|---|---|
Projects - View | X | X | X |
Projects - Create | X | X | X |
Projects - Edit | X | X | X |
Projects - Delete | X | X | X |
Authorize Devices | X | X | X |
Teams - View Users | X | X | X |
Teams - Invite Users | X | X | |
Teams - Remove Users | X | X | |
Teams - Manage User Roles | X | X | |
Workspaces - Create | X | X | X |
Workspaces - Edit | X | | |
Workspaces - Manage Plans | X | | |
Edit Profile | X | X | X |
Each user in your workspace has a handle. Workspace handles will be the same as the User Handle defined in a user's profile unless already taken or customized by a user.
Handles are used as part of the URL (in the format "handle-client-workspace.dev.8thwall.app") to preview new changes when developing with the 8th Wall Cloud Editor.
Example: tony-default-mycompany.dev.8thwall.app
Important
Modify User Handle
The Account page allows you to:
Please refer to https://www.8thwall.com/pricing for detailed information on plans and pricing.
For licensing inquiries, please contact the 8th Wall team by filling out the form at https://www.8thwall.com/licensing
To Upgrade to a paid plan:
As part of the upgrade process, if you haven't already, you may be asked to select a Workspace Name and Workspace URL:
Workspace URL: This value is used as part of the URL to access your 8th Wall workspace and related resources. It is also used as the subdomain in default URLs to 8th Wall hosted projects. This value is automatically generated from the Workspace Name, but can be customized. This cannot be changed later.
NOTE: If you are on a 14-day free trial, at the end of the trial period your account will automatically upgrade to a paid plan. Cancel online before the end of the trial period to avoid being charged for the monthly subscription.
To cancel during Free Trial:
To cancel an existing plan:
Note: You cannot cancel your Pro subscription if the workspace has any active commercial apps. You first need to cancel your commercial licenses (which will take the projects offline) and then you can cancel your Pro subscription.
To update account billing information:
On this page, you can manage your payment methods as well as the billing information you'd like to appear on your invoices.
Payment Methods
The Payment Methods widget allows you to:
Click "Add payment method" to add a new credit card to your account. If you would like this newly added credit card to be used for future bills, make sure to click "Make Default"
Invoice Details
The "Invoice Details" section of the Account page allows you to specify contact information you'd like to appear on future invoices. Update the form with desired info and click Update to save your changes:
Note: Updated payment methods and invoice details will be used in future invoices.
Commercial licenses and their payment methods can be managed from the Account page of your workspace. This section will only be displayed if you have active commercial licenses.
Cancel an active commercial license
IMPORTANT: Cancelling the license for an active commercial project will disable it and the WebAR project can no longer be viewed. This action cannot be undone!
Change payment method for an active commercial license
The Billing Summary section of the Account page allows you to view and download invoices, and make payments for any outstanding invoices. Billing Summary displays:
This section describes how to create, manage and publish WebAR projects.
From the Homepage (logged in) or Workspace Dashboard, click "Start a new Project"
Select the workspace for this project.
Enter Basic info for the project: Please provide: Title, URL, Description (optional) and Cover Image (optional). All of these fields, except URL, can be edited later in the Project Settings page.
Select a Project Type:
Commercial: Commercial projects are intended for commercial use. You can develop unlimited commercial projects with your plan at no additional charge. When you’re ready to launch a commercial project so that the world can see it, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Non-Commercial: Your paid subscription allows you to develop and publish unlimited non-commercial projects. A "Non-Commercial" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Educational Use: Educational projects are intended for educational purposes only, such as a classroom setting. You can develop and publish unlimited educational projects. An "Educational Use" label will appear on the loading screen. If your project is intended for commercial use, you must select "Commercial". If you are an educational institution, please contact 8th Wall for information on a custom Education plan.
The project dashboard is your hub for managing 8th Wall projects. From the project dashboard page you can manage project settings, access the 8th Wall Cloud Editor, purchase commercial licenses, manage image targets, setup custom domains, and more.
The direct URL to your Project Dashboard is in the format: www.8thwall.com/workspacename/projectname
Project Dashboard Overview
8th Wall Projects fall into one of the following categories:
Commercial: Commercial projects are intended for commercial use. You can develop unlimited commercial projects with your plan at no additional charge. When you’re ready to launch a commercial project so that the world can see it, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Non-Commercial: Your paid subscription allows you to develop and publish unlimited non-commercial projects. A "Non-Commercial" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Educational Use: Educational projects are intended for educational purposes only, such as a classroom setting. You can develop and publish unlimited educational projects. An "Educational Use" label will appear on the loading screen. If your project is intended for commercial use, you must select "Commercial". If you are an educational institution, please contact 8th Wall for information on a custom Education plan.
If you selected the wrong project type during initial creation, please use the Project Dashboard to change the project type as appropriate.
Follow the wizard and purchase the desired commercial license:
To manage image targets for a given Project, click either the Image Target icon in the left navigation, or the "Manage Image Targets" link on the Project Dashboard.
For detailed information on Image Targets, please refer to the Image Target documentation.
8th Wall allows you to use custom domains for both Self-Hosted projects as well as 8th Wall hosted projects.
Self Hosted Projects
If you have upgraded to a paid plan, you can host your WebAR project publicly on your own web server (and view it without device authorization). In order to do so, you will need to specify a list of domains that are approved to host your Project.
From the Project Dashboard page, select "Manage domains"
Expand "I am hosting this project myself"
Enter the domains where you will be self-hosting your project. A domain may not contain a wildcard, path, or port. Click the "+" to add multiple.
Note: Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH.
8th Wall Hosted Project
If you are using the Cloud Editor to develop your WebAR project you can take advantage of 8th Wall's Built-In Hosting.
By default, 8th Wall provides 8thwall.app URLs (e.g. myworkspace.8thwall.app/my-project-name) for hosted projects.
If you have your own domain and want to use it with an 8th Wall hosted project, you can connect your domain to your 8th Wall project (or workspace) while keeping it registered with its current registrar. To do so you'll need to update your domain's DNS settings.
NOTE: It is recommended that you use a subdomain (e.g. ar.mydomain.com) instead of the root domain (e.g. mydomain.com) as not all DNS providers support CNAME or ALIAS records for the root domain. Please contact your DNS provider to see if they support CNAME or ALIAS records for the root domain.
From the Project Dashboard page, select "Manage domains"
Expand "I am hosting this project on 8th Wall"
Enter your custom domain (e.g. www.mydomain.com), and optionally any additional domains you want redirected to your custom domain.
Click Connect. This operation can take a minute or two. Click the "Refresh status" button if needed.
Verify ownership of your domain. In order to verify that you are the owner of the custom domain, you must login to your DNS registrar's website and add a verification record to your domain. These changes can take up to 24 hours to propagate.
Once verification is complete, add DNS records to connect your domain(s) to your project.
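As an illustration only, the resulting DNS entries usually look something like the zone-file sketch below. The record names, types, and values here are placeholders; use the exact records shown in your 8th Wall dashboard:

```
; Placeholder zone-file entries -- copy the real records from your 8th Wall dashboard
verification-record.ar.mydomain.com.  IN TXT    "token-from-dashboard"
ar.mydomain.com.                      IN CNAME  target-from-dashboard.8thwall.app.
```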
Commercial licenses, by default, will run indefinitely until you decide to cancel. Ending a campaign will remove its commercial license and the WebAR project will be disabled.
Campaign Duration settings can be managed from the Project Dashboard. The following options are available:
To modify, click "Edit". Make your changes and click "Update" to save your settings.
To cancel the campaign immediately, visit the workspace Account page and manage commercial licenses.
When a launched project is cancelled or completed, the WebAR project can no longer be viewed. Users visiting the site will see an error message stating that the project is no longer available. It is a best practice to redirect users to another URL once your campaign is over.
Specify a Campaign Redirect URL to automatically redirect your users to a different site when your campaign has ended.
Campaign Redirect URLs are supported with both 8th Wall hosted and Self-hosted Projects.
From the Project Dashboard, click "Connect a URL" and enter the desired redirect URL
As a convenience, 8th Wall branded QR codes (aka "8 Codes") can be generated for a Project, making it easy to scan from a mobile device to access your WebAR project. You are always welcome to generate your own QR codes, or use third-party QR code generation websites or services.
An "8th.io" shortlink will also be generated.
To generate a QR code, enter the desired URL and click Connect.
The generated QR code can be downloaded in either PNG or SVG format to be included on a website, physical media, or other locations to make it easy for users to scan with their smartphones to visit the connected URL.
Example:
8th Wall Projects provide basic usage analytics so that you can see how many times your project has been viewed in the past 30 days. The usage graph is a rolling 30-day window and can display either total or daily usage during that time period.
Projects with usage based commercial licenses will also display view counts for the current billing period. Usage is measured in 100 view increments. Usage from previous months can be found in the Billing Summary of the Account page.
The Project Settings page allows you to:
Edit Project information:
The following Code Editor preferences can be set:
Dark Mode (On/Off)
Keybindings
Enable keybindings from popular text editors. Select from:
Project Settings allows you to edit the Basic Information for your Project
Project Title
Description
Enable/Disable default splash screen
Update cover image
When your app is staged to XXXXX.staging.8thwall.app (where XXXXX represents your Workspace URL), it is passcode protected. In order to view the WebAR Project a user must first enter the passcode you provide. This is a great way to preview projects with clients or other stakeholders prior to launching publicly.
A passcode should be 5 or more characters and can include letters (A-Z, lower or upper case), numbers (0-9) and spaces.
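Expressed as a pattern, the rule looks like the following (an illustrative check, not an 8th Wall API):

```javascript
// Illustrative validation of the passcode rules above:
// letters, numbers, and spaces; at least 5 characters.
const isValidPasscode = (s) => /^[A-Za-z0-9 ]{5,}$/.test(s)
```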
If you have upgraded to a paid plan, you can host your Web Application publicly on your own web server (and view it without device authorization). In order to do so, you will need to specify a list of domains that are approved to host your Project.
From the Project Dashboard page, select "Manage domains".
Expand "I am hosting this project myself"
Enter the domains where you will be self-hosting your project. A domain may not contain a wildcard, path, or port. Click the "+" to add multiple.
Note: Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH.
If you are building a Self-hosted Project, you'll need to add your App Key to the project.
Click the Copy button and then paste it into your index.html
Example:
<script src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXX"></script>
(Replace the XXX's with your App Key string)
NOTE: This is only available to workspaces on paid plans.
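For context, a minimal self-hosted A-Frame <head> might look like the sketch below. The script ordering follows the self-hosted examples in 8th Wall's GitHub repo, and the appKey value is a placeholder:

```html
<!-- Minimal sketch of a self-hosted <head>; replace the appKey placeholder -->
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script src="//cdn.8thwall.com/web/xrextras/xrextras.js"></script>
<script src="//apps.8thwall.com/xrweb?appKey=XXXXXXXXX"></script>
```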
For each public Project, you can specify the version of the XR engine used when serving public web clients.
If you select a Channel (release or beta), public clients will always be served the most recent version of 8th Wall Web from that channel. If you freeze the version, you will need to manually unfreeze to receive the latest features and improvements of the engine.
In general, 8th Wall recommends using the official release channel for production web apps.
If you would like to test your web app against a pre-release version of 8th Wall Web, which may contain new features and/or bug fixes that haven't gone through full QA yet, you can switch to the beta channel:
To Freeze to a specific version, select the desired Channel (release or beta) and click the Freeze button
To Re-join a Channel and stay up-to-date, click the Unfreeze button. This will unfreeze the Engine Version associated with your Project and re-join a Channel (release or beta).
Unpublishing your project will remove it from staging (XXXXX.staging.8thwall.app) or public (XXXXX.8thwall.app).
You can publish it again at any time from the Code Editor or Project History pages.
Click Unpublish Staging to take your Project down from XXXXX.staging.8thwall.app
Click Unpublish Public to take your Project down from XXXXX.8thwall.app
If you disable your project, your app will not be viewable. Views will not be counted while disabled.
You will still be charged for any active commercial licenses on projects that are temporarily disabled.
Toggle the slider to Disable / Enable your project.
A project with a commercial license cannot be deleted. Visit the Account page to cancel an active commercial project.
Deleting a Project will cause it to stop working. You cannot undo this operation.
Bring signage, magazines, boxes, bottles, cups, and cans to life with 8th Wall Image Targets. 8th Wall Web can detect and track flat, cylindrical and conical shaped image targets, allowing you to bring static content to life.
Not only can your designated image target trigger a web AR experience, but your content also has the ability to track directly to it.
Image targets can work in tandem with our World Tracking (SLAM), enabling experiences that combine image targets and markerless tracking.
You may track up to 5 image targets simultaneously with World Tracking enabled or up to 10 when it is disabled.
Up to 5 image targets per project can be "Autoloaded". An Autoloaded image target is enabled immediately as the page loads. This is useful for apps that use 5 or fewer image targets such as product packaging, a movie poster or business card.
The set of active image targets can be changed at any time by calling XR8.XrController.configure(). This lets you manage hundreds of image targets per project making possible use cases like geo-fenced image target hunts, AR books, guided art museum tours and much more. If your project utilizes SLAM most of the time but image targets some of the time, you can improve performance by only loading image targets when you need them. You can even read uploaded target names from URL parameters stored in different QR Codes, allowing you to have different targets initially load in the same web app depending on which QR Codes the user scans to enter the experience.
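For example, a sketch that reads a target list from a query parameter (the parameter name targets and the fallback target name are arbitrary choices for this example):

```javascript
// Sketch: load different image targets depending on which QR code was
// scanned, e.g. https://myproject.example.com/?targets=poster-a,poster-b
const params = new URLSearchParams(window.location.search)
const imageTargets = (params.get('targets') || 'default-target').split(',')
XR8.XrController.configure({imageTargets})
```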
Flat: Track 2D images like posters, signs, magazines, boxes, etc.
Cylindrical: Track images wrapped around cylindrical items like cans and bottles.
Conical: Track images wrapped around objects with a different top vs. bottom circumference, like coffee cups.
Dimensions:
Maximum length or width: 2048 pixels.
You may track up to 5 image targets simultaneously while World Tracking (SLAM) is running. If you disable World Tracking (SLAM) by setting "disableWorldTracking: true" and specify your image target set programmatically, you may track up to 10 simultaneously.
Click the Image Target icon in the left navigation or the "Manage Image Targets" link on the Project Dashboard to manage your image targets.
This screen allows you to create, edit, and delete the image targets associated with your project. Click on an existing image target to edit. Click the "+" icon for the desired image target type to create a new one.
Upload Flat Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.
Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.
Upload Curved Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.
Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.
Set Small Arc Alignment: Do the same for the small arc. Drag the slider until the blue line overlays the uploaded image's small arc.
Set Tracking Region (and Orientation): Drag and zoom on the image to set the portion of the image that is detected and tracked. This should be the most feature rich area of your image.
Click on any of the image targets under My Image Targets to view and/or modify their properties:
(Screenshots of the editable fields for Flat, Cylindrical, and Conical image targets.)
The set of active image targets can be modified at runtime by calling XR8.XrController.configure()
Note: All currently active image targets will be replaced with the ones specified in this list.
XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})
To ensure the highest quality image target tracking experience, be sure to follow these guidelines when selecting an image target.
DO have:
DON'T have:
Color: Image target detection cannot distinguish between colors, so don't rely on it as a key differentiator between targets.
For best results, use images on flat, cylindrical or conical surfaces for image target tracking.
Consider the reflectivity of your image target's physical material. Glossy surfaces and screen reflections can lower tracking quality. Use matte materials in diffuse lighting conditions for optimal tracking quality.
Note: Detection happens fastest in the center of the screen.
(Example images of good and bad markers.)
8th Wall Web emits Events / Observables for various events in the image target lifecycle (e.g. imageloading, imagescanning, imagefound, imageupdated, imagelost). Please see the API reference for instructions on handling these events in your Web Application:
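As a quick sketch, the A-Frame bindings surface these lifecycle events on the scene element (event names per the 8th Wall A-Frame integration):

```javascript
// Sketch: react to image target lifecycle events in an A-Frame project.
const scene = document.querySelector('a-scene')
scene.addEventListener('xrimagefound', ({detail}) => {
  console.log(`found image target: ${detail.name}`)
})
scene.addEventListener('xrimagelost', ({detail}) => {
  console.log(`lost image target: ${detail.name}`)
})
```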
Example Projects
https://github.com/8thwall/web/tree/master/examples/aframe/artgallery
https://github.com/8thwall/web/tree/master/examples/aframe/flyer
8th Wall's XRExtras library provides modules that handle the most common WebAR application needs, including the load screen, social link-out flows and error handling.
The Loading module displays a loading overlay and camera permissions prompt while libraries are loading, and while the camera is starting up. It's the first thing your users see when they enter your WebAR experience.
This section describes how to customize the loading screen by providing values that change the color, load spinner, and load animation to match the overall design of your experience.
IDs / Classes to override
(Screenshots: the loading screen, with numbered callouts referenced in the table below, and the iOS 13+ motion sensor prompt.)
To customize the text of the motion sensor prompt, you can use a MutationObserver. Please refer to the code example below.
A-Frame component parameters
If you are using XRExtras with an A-Frame project, the xrextras-loading module makes it easy to customize the load screen via the following parameters:
Parameter | Type | Description |
---|---|---|
cameraBackgroundColor | Hex Color | Background color of the loading screen's top section behind the camera icon and text (See above. Loading Screen #1) |
loadBackgroundColor | Hex Color | Background color of the loading screen's lower section behind the loadImage (See above. Loading Screen #3) |
loadImage | ID | The ID of an image. The image needs to be an <a-asset> (See above. Loading Screen #4) |
loadAnimation | String | Animation style of loadImage . Choose from spin (default), pulse , scale , or none |
<a-scene
  tap-place
  xrextras-almost-there
  xrextras-loading="loadBackgroundColor: #007AFF; cameraBackgroundColor: #5AC8FA; loadImage: #myCustomImage; loadAnimation: pulse"
  xrextras-runtime-error
  xrweb>
  <a-assets>
    <img id="myCustomImage" src="assets/my-custom-image.png">
  </a-assets>
const load = () => {
XRExtras.Loading.showLoading()
console.log('customizing loading spinner')
const loadImage = document.getElementById("loadImage")
if (loadImage) {
loadImage.src="img/my-custom-image.png"
}
}
window.XRExtras ? load() : window.addEventListener('xrextrasloaded', load)
#requestingCameraPermissions {
  color: black;
  background-color: white;
}
#requestingCameraIcon {
  /* This changes the image from white to black */
  filter: invert(1);
}
.prompt-box-8w {
  background-color: white;
  color: #00FF00;
}
.prompt-button-8w {
  background-color: #0000FF;
}
.button-primary-8w {
  background-color: #7611B7;
}
let inDom = false
const observer = new MutationObserver(() => {
if (document.querySelector('.prompt-box-8w')) {
if (!inDom) {
document.querySelector('.prompt-box-8w p').innerHTML = '<strong>My new text goes here</strong><br/><br/>Press Approve to continue.'
document.querySelector('.prompt-button-8w').innerHTML = 'Deny'
document.querySelector('.button-primary-8w').innerHTML = 'Approve'
}
inDom = true
} else if (inDom) {
inDom = false
observer.disconnect()
}
})
observer.observe(document.body, {childList: true})
8th Wall's XRExtras library provides modules that handle the most common WebAR application needs, including the load screen, social link-out flows and error handling.
The XRExtras MediaRecorder module makes it easy to customize the Video Recording user experience in your project.
This section describes how to customize recorded videos with things like capture button behavior (tap vs hold), add video watermarks, max video length, end card behavior and looks, etc.
A-Frame primitives
xrextras-capture-button: Adds a capture button to the scene.
Parameter | Type | Default | Description |
---|---|---|---|
capture-mode | string | "standard" | Sets the capture mode behavior. standard: tap to take photo, tap + hold to record video. fixed: tap to toggle video recording. photo: tap to take photo. One of [standard, fixed, photo] |
xrextras-capture-config: Configures the captured media.
Parameter | Type | Default | Description |
---|---|---|---|
max-duration-ms | int | 15000 | Total video duration (in milliseconds) that the capture button allows. If the end card is disabled, this corresponds to max user record time. 15000 by default. |
max-dimension | int | 1280 | Maximum dimension (width or height) of captured video. For photo configuration, please see XR8.CanvasScreenshot.configure() |
enable-end-card | bool | true | Whether the end card is included in the recorded media. |
cover-image-url | string | Image source for end card cover image. Uses project's cover image by default. | |
end-card-call-to-action | string | "Try it at: " | Sets the text string for call to action on end card. |
short-link | string | Sets the text string for end card shortlink. Uses project shortlink by default. | |
footer-image-url | string | Powered by 8th Wall image | Image source for end card footer image. |
watermark-image-url | string | null | Image source for watermark. |
watermark-max-width | int | 20 | Max width (%) of watermark image. |
watermark-max-height | int | 20 | Max height (%) of watermark image. |
watermark-location | string | "bottomRight" | Location of watermark image. One of topLeft, topMiddle, topRight, bottomLeft, bottomMiddle, bottomRight |
file-name-prefix | string | "my-capture-" | Sets the text string that prepends the unique timestamp on file name. |
request-mic | string | "auto" | Determines if you want to set up the microphone during initialization ("auto") or during runtime ("manual") |
include-scene-audio | bool | true | If true, the A-Frame sounds in the scene will be part of the recorded output. |
xrextras-capture-preview: Adds a media preview prefab to the scene which allows for playback, downloading, and sharing.
Parameter | Type | Default | Description |
---|---|---|---|
action-button-share-text | string | "Share" | Sets the text string in action button when Web Share API 2 is available (iOS 14, Android). “Share” by default. |
action-button-view-text | string | "View" | Sets the text string in action button when Web Share API 2 is not available in iOS (iOS 13). “View” by default. |
XRExtras.MediaRecorder Events
XRExtras.MediaRecorder emits the following events.
Events Emitted
Event Emitted | Description |
---|---|
mediarecorder-photocomplete | Emitted after a photo is taken. |
mediarecorder-recordcomplete | Emitted after a video recording is complete. |
mediarecorder-previewopened | Emitted after recording preview is opened. |
mediarecorder-previewclosed | Emitted after recording preview is closed. |
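A small sketch of listening for these events, assuming (as with other XRExtras events) that they are dispatched on window:

```javascript
// Sketch: hook into the capture lifecycle. The window-level dispatch is an
// assumption; adjust if your XRExtras version surfaces these differently.
window.addEventListener('mediarecorder-recordcomplete', () => {
  console.log('video recording finished')
})
window.addEventListener('mediarecorder-previewclosed', () => {
  console.log('user closed the capture preview')
})
```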
<xrextras-capture-button capture-mode="standard"></xrextras-capture-button>
<xrextras-capture-config
  max-duration-ms="15000"
  max-dimension="1280"
  enable-end-card="true"
  cover-image-url=""
  end-card-call-to-action="Try it at:"
  short-link=""
  footer-image-url="//cdn.8thwall.com/web/img/almostthere/v2/poweredby-horiz-white-2.svg"
  watermark-image-url="//cdn.8thwall.com/web/img/mediarecorder/8logo.png"
  watermark-max-width="100"
  watermark-max-height="10"
  watermark-location="bottomRight"
  file-name-prefix="my-capture-"
></xrextras-capture-config>
<xrextras-capture-preview
  action-button-share-text="Share"
  action-button-view-text="View"
></xrextras-capture-preview>
#actionButton { /* change color of action button */ background-color: #007aff !important; }
8th Wall projects provide basic usage analytics, allowing you to see how many "views" you have received in the past 30 days. If you are looking for more detailed and/or historical analytics, we recommend adding 3rd party web analytics to your WebAR experience.
The process for adding analytics to a WebAR experience is the same as adding them to any non-AR website. You are welcome to use any analytics solution you prefer.
In this example, we’ll explain how to add Google Analytics to your 8th Wall project using Google Tag Manager (GTM) - making it easy to collect custom analytics on how users are both viewing and interacting with your WebAR experience.
Using GTM’s web-based user interface, you can define tags and create triggers that cause your tag to fire when certain events occur. In your 8th Wall project, fire events (using a single line of JavaScript) at desired places in your code.
You must already have Google Analytics and Google Tag Manager accounts and have a basic understanding of how they work.
For more information, please refer to the following Google documentation:
Google Analytics
Google Tag Manager
import * as googleTagManagerHtml from './gtm.html'
document.body.insertAdjacentHTML('afterbegin', googleTagManagerHtml)
Example:
At a minimum, create a Tag that will fire upon page load so that you can track information about visitors to your Web AR experience.
Create Tag
GTM also provides the ability to fire events when custom actions take place inside the WebAR experience. These events will be particular to your WebAR project, but some examples might be:
In this example, we’ll create a Tag (with Trigger) and add it to the "AFrame: Place Ground" sample project that fires each time a 3D model is spawned.
Create Custom Event Trigger
Create Tag
Next, create a tag that will fire when the "placeModel" trigger is fired in your code.
IMPORTANT: Make sure to save all triggers/tags created and then Submit/Publish your settings inside the GTM interface so they are live. See https://support.google.com/tagmanager/answer/6107163
Fire Event Inside 8th Wall Project
In your 8th Wall project, add the following line of javascript to fire this trigger at the desired place in your code:
window.dataLayer.push({event: 'placeModel'})
export const tapPlaceComponent = {
init: function() {
const ground = document.getElementById('ground')
ground.addEventListener('click', event => {
// Create new entity for the new object
const newElement = document.createElement('a-entity')
// The raycaster gives a location of the touch in the scene
const touchPoint = event.detail.intersection.point
newElement.setAttribute('position', touchPoint)
const randomYRotation = Math.random() * 360
newElement.setAttribute('rotation', '0 ' + randomYRotation + ' 0')
newElement.setAttribute('visible', 'false')
newElement.setAttribute('scale', '0.0001 0.0001 0.0001')
newElement.setAttribute('shadow', {
receive: false,
})
newElement.setAttribute('class', 'cantap')
newElement.setAttribute('hold-drag', '')
newElement.setAttribute('gltf-model', '#treeModel')
this.el.sceneEl.appendChild(newElement)
newElement.addEventListener('model-loaded', () => {
// Once the model is loaded, we are ready to show it popping in using an animation
newElement.setAttribute('visible', 'true')
newElement.setAttribute('animation', {
property: 'scale',
to: '7 7 7',
easing: 'easeOutElastic',
dur: 800,
})
// **************************************************
// Fire Google Tag Manager event once model is loaded
// **************************************************
window.dataLayer.push({event: 'placeModel'})
})
})
}
}
The Asset bundle feature of 8th Wall's Cloud Editor allows for the use of multi-file assets. These assets typically involve files that reference each other internally using relative paths. ".glTF", ".hcap", ".msdf" and cubemap assets are a few common examples.
In the case of .hcap files, you load the asset via the "main" file, e.g. "my-hologram.hcap". Inside this file are many references to other dependent resources, such as .mp4 and .bin files. These filenames are referenced and loaded by the main file as URLs with paths relative to the .hcap file.
Use one of the following methods to prepare your files before upload:
Option 1:
In the Cloud Editor, click the "+" to the right of ASSETS and select "New asset bundle". Next, select the asset type. If you aren't uploading a glTF or HCAP asset, select "Other".
Option 2:
Alternatively, you can drag the assets or ZIP directly into the ASSETS pane at the bottom-right of the Cloud Editor.
After the files have been uploaded, you'll be able to preview the assets before adding it to your project. Select individual files in the left pane to preview them on the right.
If your asset type requires you to reference a file, set this file as your "main file". If your asset type requires you to reference a folder (cubemaps, etc.), set "none" as your "main file".
Note: This step is not required for glTF or HCAP assets. The main file is set automatically for these asset types.
The main file cannot be changed later. If you select the wrong file, you'll have to re-upload the asset bundle.
Give the asset bundle a name. This is the filename by which you'll access the asset bundle within your project.
The upload of your asset bundle will be completed and it will be added to your Cloud Editor project.
Assets can be previewed directly within the Cloud Editor. Select an asset on the left to preview on the right. You can preview a specific asset inside the bundle by expanding the "Show contents" menu on the right and selecting an asset inside.
To rename an asset, click the "down arrow" icon to the right of your asset and choose Rename. Edit the name of the asset and hit Enter to save. Important: if you rename an asset, you'll need to go through your project and make sure all references point to the updated asset name.
To delete an asset, click the "down arrow" icon to the right of your asset and choose Delete.
To reference the asset bundle from an html file in your project (e.g. body.html), simply provide the appropriate path to the src= or gltf-model= parameter.
To reference the asset bundle from JavaScript, use require()
<!-- Example 1 -->
<a-assets>
  <a-asset-item id="myModel" src="assets/sand-castle.gltf"></a-asset-item>
</a-assets>
<a-entity id="model" gltf-model="#myModel" class="cantap" scale="3 3 3" shadow="receive: false"></a-entity>

<!-- Example 2 -->
<holo-cap
  id="holo"
  src="./assets/my-hologram.hcap"
  holo-scale="6"
  holo-touch-target="1.65 0.35"
  xrextras-hold-drag
  xrextras-two-finger-rotate
  xrextras-pinch-scale="scale: 6">
</holo-cap>
const modelFile = require('./assets/my-model.gltf')
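For instance, the required path can then be handed to an entity (a sketch; the model entity id is hypothetical):

```javascript
// Sketch: wire a bundled asset to an A-Frame entity.
// 'model' is a hypothetical entity id from your scene.
const modelFile = require('./assets/my-model.gltf')
document.getElementById('model').setAttribute('gltf-model', modelFile)
```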
Starting with iOS 9.2, Safari blocked deviceorientation and devicemotion event access from cross-origin iframes.
This prevents 8th Wall Web (if running inside the iframe) from receiving the deviceorientation and devicemotion data required for proper tracking if SLAM is enabled. (See Web Browser Requirements.) The result is that the orientation of your digital content will appear to be wrong, and the content will "jump" all over the place when you move the phone.
If you have access to the parent window, it's possible to add a script on the parent page that will send custom messages containing deviceorientation and devicemotion data to 8th Wall's AR Engine inside the iframe via JavaScript's postMessage() method. The postMessage() method safely enables cross-origin communication between Window objects; e.g., between a page and an iframe embedded within it. (See https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage)
For maximum compatibility with iOS devices, we have created two scripts:
For the OUTER website
iframe.js must be included in the HEAD of the OUTER page via this script tag:
<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>
When starting AR, register the XRIFrame by iframe ID:
window.XRIFrame.registerXRIFrame(IFRAME_ID)
When stopping AR, deregister the XRIFrame:
window.XRIFrame.deregisterXRIFrame()
For the INNER website
iframe-inner.js must be included in the HEAD of your INNER AR website with this script tag:
<script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
By allowing the inner and outer windows to communicate, deviceorientation/devicemotion data can be shared.
See sample project at https://www.8thwall.com/8thwall/inline-ar
<!-- Send deviceorientation/devicemotion to the INNER iframe -->
<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>
...
const IFRAME_ID = 'my-iframe'  // Iframe containing AR content.
const onLoad = () => {
  window.XRIFrame.registerXRIFrame(IFRAME_ID)
}
// Add event listeners and callbacks for the body DOM.
window.addEventListener('load', onLoad, false)
...
<body>
  <iframe
    id="my-iframe"
    style="border: 0; width: 100%; height: 100%"
    allow="camera;microphone;gyroscope;accelerometer;"
    src="https://www.other-domain.com/my-web-ar/">
  </iframe>
</body>
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>
...
<body>
  <!-- For A-FRAME -->
  <!-- NOTE: The iframe-inner script must load after A-FRAME, and the
       iframe-inner component must appear before xrweb. -->
  <a-scene iframe-inner xrweb>
    ...
  </a-scene>
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>
...
<!-- For non-AFrame projects, add iframeInnerPipelineModule to the custom
     pipeline module section, typically located in "onxrloaded" like so: -->
XR8.addCameraPipelineModules([
  // Custom pipeline modules
  iframeInnerPipelineModule,
])
Progressive Web Apps (PWAs) use modern web capabilities to offer users an experience that's similar to a native application. The 8th Wall Cloud Editor allows you to create a PWA version of your project so that users can add it to their home screen. Users must be connected to the internet in order to access it.
NOTE: Progressive Web Apps are only available to accounts with a paid plan.
To enable PWA support for your WebAR project:
Note: For Cloud Editor projects, you may be prompted to build & re-publish your project if it was previously published. If you decide not to republish, PWA support will be included the next time your project is built.
8th Wall's XRExtras library provides an API to automatically display an install prompt in your web app.
Please refer to the PwaInstaller API reference at https://github.com/8thwall/web/tree/master/xrextras/src/pwainstallermodule
Dimensions:
Minimum: 512 x 512 pixels
The PwaInstaller module from XRExtras displays an install prompt asking your user to add your web app to their home screen.
To customize the look of your install prompt, you can provide custom string values through the XRExtras.PwaInstaller.configure() API.
For a completely custom install prompt, configure the installer with displayInstallPrompt and hideInstallPrompt methods.
For Self-Hosted apps, we aren’t able to automatically inject details of the PWA into the HTML, so you must use the configure API to provide the name and icon you’d like to appear in the install prompt.
Add the following <meta> tags to the <head> of your html:
<meta name="8thwall:pwa_name" content="My PWA Name">
<meta name="8thwall:pwa_icon" content="//cdn.mydomain.com/my_icon.png">
<a-scene xrextras-almost-there xrextras-loading xrextras-runtime-error xrextras-pwa-installer xrweb>
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.AlmostThere.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
XRExtras.PwaInstaller.pipelineModule(), // Added here
// Custom pipeline modules.
myCustomPipelineModule(),
])
<a-scene
  xrextras-gesture-detector
  xrextras-almost-there
  xrextras-loading
  xrextras-runtime-error
  xrextras-pwa-installer="name: My Cool PWA; iconSrc: '//cdn.8thwall.com/my_custom_icon'; installTitle: 'My CustomTitle'; installSubtitle: 'My Custom Subtitle'; installButtonText: 'Custom Install'; iosInstallText: 'Custom iOS Install'"
  xrweb>
XRExtras.PwaInstaller.configure({
displayConfig: {
name: 'My Custom PWA Name',
iconSrc: '//cdn.8thwall.com/my_custom_icon',
installTitle: ' My Custom Title',
installSubtitle: 'My Custom Subtitle',
installButtonText: 'Custom Install',
iosInstallText: 'Custom iOS Install',
}
})
<a-scene
  xrweb="disableWorldTracking: true"
  xrextras-gesture-detector
  xrextras-almost-there
  xrextras-loading
  xrextras-runtime-error
  xrextras-pwa-installer="minNumVisits: 5; displayAfterDismissalMillis: 86400000;">
XRExtras.PwaInstaller.configure({
promptConfig: {
minNumVisits: 5, // Users must visit web app 5 times before prompt
displayAfterDismissalMillis: 86400000 // One day
}
})
If you are using 8th Wall Web with A-Frame, three.js or Babylon.js, we recommend using 3D models in GLB (glTF 2.0 binary) format in your Web AR experiences. We believe GLB is currently the best format for Web AR with its small file size, great performance and versatile feature support (PBR, animations, etc).
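As a minimal sketch, a GLB is referenced in A-Frame the same way as any glTF asset (the file name here is a placeholder):

```html
<!-- Sketch: loading a GLB model in A-Frame -->
<a-assets>
  <a-asset-item id="myGlb" src="assets/my-model.glb"></a-asset-item>
</a-assets>
<a-entity gltf-model="#myGlb"></a-entity>
```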
For more information about 3d model best practices and links to a number of GLB converters, please visit:
If you are on a paid plan, you gain the ability to self-host WebAR experiences. If you are self-hosting on a webserver that hasn't been whitelisted (see the Connected Domains section of the documentation), you will need to authorize your device in order to view.
Authorizing a device installs a Developer Token (cookie) into its web browser, allowing it to view any app key within the current workspace.
There is no limit to the number of devices that can be authorized, but each device needs to be authorized individually. Views of your web application from an authorized device count toward your monthly usage total.
IMPORTANT: If you have followed the steps below on an iOS device, and are still having issues, please see the Troubleshooting section for steps to fix. Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.
How to authorize a device:
Login to 8thwall.com and select a Project.
Click Device Authorization to expand the device authorization pane.
Select 8th Wall Engine version to use during development. To use the latest stable version of 8th Wall, select release. To test against a pre-release version, select beta.
From Desktop: If you are logged into the console on your laptop/desktop, Scan the QR code from the device you wish to authorize. This installs an authorization cookie on the device.
Note: A QR code can only be scanned once. After scanning, you will receive confirmation that your device has been authorized. The console will then generate a new QR code that can be scanned to authorize another device.
(Screenshots: the device authorization QR code before and after scanning, with confirmation shown in the console and on the device.)
From Mobile: If you are logged into 8thwall.com directly on the mobile device you wish to authorize, simply click Authorize browser. Doing so installs an authorization cookie into your mobile browser, authorizing it to view any project within the current workspace.
(Screenshots: the Authorize browser button before and after authorization.)
If you are on a paid plan, you gain the ability to host WebAR projects on your own web servers.
Serving a web app locally from your computer can be tricky, as browsers require HTTPS to grant camera access on your phone. As a convenience, 8th Wall has created a public GitHub repo (https://github.com/8thwall/web) where you can find a "serve" script that will run a local https webserver on your development computer. You can also download sample 8th Wall Web projects to help you get started with self-hosted configurations.
If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>
Example:
./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777
IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
NOTE: If the serve script states it's listening on 127.0.0.1:<port> (the loopback device, aka "localhost"), your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.
Example - specify network interface:
./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0
If you have issues connecting to the local webserver running on your computer, please refer to the troubleshooting section
Serving a web app locally from your computer can be tricky, as browsers require HTTPS to grant camera access on your phone. As a convenience, 8th Wall has created a public GitHub repo (https://github.com/8thwall/web) where you can find a "serve" script that will run a local https webserver on your development computer. You can also download sample 8th Wall Web projects to help you get started.
If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm
Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# serve\bin\serve.bat -d <sample_project_location>
Example:
serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777
IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
NOTE: If the serve script states it's listening on 127.0.0.1:<port> (the loopback device, aka "localhost"), your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.
Example - specify network interface:
serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi
If you have issues connecting to the local webserver running on your computer, please refer to the troubleshooting section
IMPORTANT: Make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
Example: https://192.168.1.50:8080
This section of the documentation is intended for advanced users who are using the 8th Wall Cloud Editor and need to create a completely customized version of XRExtras. This process involves:
If you only need to make basic customizations of the XRExtras loading screen, please refer to this section instead.
Note: By importing a copy of XRExtras into your Cloud Editor project, you will no longer receive the latest XRExtras updates and functionality available from the CDN. Make sure to always pull the latest version of the XRExtras code from GitHub as you start new projects.
Instructions:
Copy the XRExtras source into a myxrextras folder within your Cloud Editor project
Replace module.exports with export:
Examples:
myxrextras/aframe/aframe.js:
Changing/Adding image assets
First, drag & drop new images into assets/ to upload them to your project:
In html files with src params, refer to the image asset using a relative path:
<img src="../../assets/my-logo.png" id="loadImage" class="spin" />
In JavaScript files, use a relative path and require() to reference assets:
img.src = require('../../assets/my-logo.png')
Release 15.2: (2020-December-14, v15.2.4.487)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 15.1: (2020-October-27, v15.1.4.487)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 15: (2020-October-09, v15.0.9.487 / 2020-September-22, v15.0.8.487)
New Features:
8th Wall Curved Image Targets:
Fixes and Enhancements:
XRExtras Enhancements:
New AFrame components for easy Curved Image Target development:
Release 14.2: (2020-July-30, v14.2.4.949)
New Features:
Updated MediaRecorder.configure() to provide more control over audio output and mixing:
Fixes and Enhancements:
Release 14.1: (2020-July-06, v14.1.4.949)
New Features:
Introducing 8th Wall Video Recording:
Fixes and Enhancements:
XRExtras Enhancements:
Record button prefab component for capturing video and photos:
Use XRExtras to easily customize the Video Recording user experience in your project:
Release 14: (2020-May-26)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 13.2: (2020-Feb-13)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 13.1:
New Features:
Fixes and Enhancements:
Release 13:
New Features:
Release 12.1:
Fixes and Enhancements:
Release 12:
New Features:
Fixes:
XRExtras:
Release 11.2:
New Features:
Release 11.1:
Fixes and Enhancements:
Release 11:
New Features:
Release 10.1:
New Features:
Fixes:
Release 10:
Release 10 adds a revamped web developer console with streamlined developer-mode, access to allowed origins and QR codes. It adds 8th Wall Web support for XRExtras, an open-source package for error handling, loading visualizations, "almost there" flows, and more.
New Features:
XR Extras provides a convenient solution for:
Fixes:
Release 9.3:
New Features:
Release 9.2:
New Features:
Release 9.1:
New Features:
Release 9:
Issue: When trying to view my Web App, I receive a "Device Not Authorized" error message.
Safari specific:
The situation:
Why does this happen?
Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.
Steps to fix:
Settings>Safari>Prevent Cross-Site Tracking
Settings>Safari>Advanced>Website Data>8thwall.com
Settings>Safari>Clear History and Website Data
Otherwise, see the Invalid App Key steps from #5 onwards for more troubleshooting.
Issue: When trying to view my Web App, I receive an "Invalid App Key" or "Domain Not Authorized" error message.
Troubleshooting steps:
Issue: As I move my phone, the camera position does not update.
Resolution: Check the position of the camera in your scene. The camera should NOT be at a height (Y) of zero; set it to a non-zero value. The Y position of the camera at start effectively determines the scale of virtual content on a surface (e.g. a smaller starting Y makes content appear larger).
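For example, a minimal A-Frame sketch (the starting height of 8 is an arbitrary illustrative value):
<a-scene xrweb>
  <!-- Starting the camera at Y=8 makes content on the ground plane appear smaller than it would at, say, Y=1 -->
  <a-camera position="0 8 0"></a-camera>
</a-scene>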
Issue: Content in my scene doesn't appear to be "sticking" to a surface properly
Resolution:
To place an object on a surface, the base of the object needs to be at a height of Y=0
Note: Setting the position at a height of Y=0 isn't necessarily sufficient.
For example, if the transform of your model is at the center of the object, placing it at Y=0 will result in part of the object sitting below the surface. In this case you'll need to adjust the vertical position of the object so that the bottom of the object sits at Y=0.
It's often helpful to visualize object positioning relative to the surface by placing a semi-transparent plane at Y=0.
<a-plane position="0 0 0" rotation="-90 0 0" width="4" height="4" material="side: double; color: #FFFF00; transparent: true; opacity: 0.5" shadow></a-plane>
// Create a 1x1 plane with a transparent yellow material
var geometry = new THREE.PlaneGeometry(1, 1, 1, 1)  // THREE.PlaneGeometry(width, height, widthSegments, heightSegments)
var material = new THREE.MeshBasicMaterial({color: 0xffff00, transparent: true, opacity: 0.5, side: THREE.DoubleSide})
var plane = new THREE.Mesh(geometry, material)
// Rotate 90 degrees along X (in radians) so the plane is parallel to the ground
plane.rotateX(Math.PI / 2)
plane.position.set(0, 0, 0)
scene.add(plane)
Issue:
I'm using the "serve" script (from 8th Wall Web's public GitHub repo: https://github.com/8thwall/web) to run a local webserver on my laptop and it says it's listening on 127.0.0.1. My phone is unable to connect to the laptop using that IP address.
"127.0.0.1" is the loopback address of your laptop (aka "localhost"), so other devices such as your phone won't be able to connect directly to that IP address. For some reason, the serve
script has decided to listen on the loopback interface.
Resolution:
Please re-run the serve script with the -i flag and specify the network interface you wish to use.
Example (Mac):
./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0
Example (Windows):
Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.
serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi
If you are still unable to connect, please check the following:
Need some help? 8th Wall is here to help you succeed. Contact us directly, or reach out to the community to get answers.
Ways to get help:
Slack | Email Support | Stack Overflow | GitHub |
---|---|---|---|
[1] Intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for 8th Wall’s products remain at the sole discretion of 8th Wall, Inc.
This section of the documentation contains details of 8th Wall Web's JavaScript API.
Description
Entry point for 8th Wall's JavaScript API
Functions
Function | Description |
---|---|
addCameraPipelineModule | Adds a module to the camera pipeline that will receive event callbacks for each stage in the camera pipeline. |
addCameraPipelineModules | Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array. |
clearCameraPipelineModules | Remove all camera pipeline modules from the camera loop. |
isPaused | Indicates whether or not the XR session is paused. |
pause | Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked. |
resume | Resume the current XR session. |
removeCameraPipelineModule | Removes a module from the camera pipeline. |
removeCameraPipelineModules | Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array. |
requiredPermissions | Return a list of permissions required by the application. |
run | Open the camera and start running the camera run loop. |
runPreRender | Executes all lifecycle updates that should happen before rendering. |
runPostRender | Executes all lifecycle updates that should happen after rendering. |
stop | Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. |
version | Get the 8th Wall Web engine version. |
Events
Event Emitted | Description |
---|---|
xrloaded | This event is emitted once XR8 has loaded. |
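A typical guard for this event (the onxrloaded handler is your own entry function):
const onxrloaded = () => { /* safe to make XR8 calls here */ }
window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)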
Modules
Module | Description |
---|---|
AFrame | Entry point for A-Frame integration with 8th Wall Web. |
Babylonjs | Entry point for Babylon.js integration with 8th Wall Web. |
CameraPixelArray | Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array. |
CanvasScreenshot | Provides a camera pipeline module that can generate screenshots of the current scene. |
FaceController | Provides face detection and meshing, and interfaces for configuring tracking. |
GlTextureRenderer | Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations. |
MediaRecorder | Provides a camera pipeline module that allows you to record a video in MP4 format. |
PlayCanvas | Entry point for PlayCanvas integration with 8th Wall Web. |
Sumerian | Entry point for Sumerian integration with 8th Wall Web. |
Threejs | Provides a camera pipeline module that drives three.js camera to do virtual overlays. |
XrConfig | Specifying class of devices and cameras that pipeline modules should run on. |
XrController | XrController provides 6DoF camera tracking and interfaces for configuring tracking. |
XrDevice | Provides information about device compatibility and characteristics. |
XrPermissions | Utilities for specifying permissions required by a pipeline module. |
XR8.addCameraPipelineModule()
Description
8th Wall camera applications are built using a camera pipeline module framework. For a full description on camera pipeline modules, see CameraPipelineModule.
Applications install modules which then control the behavior of the application at runtime. A module object must have a .name string which is unique within the application, and then should provide one or more of the camera lifecycle methods which will be executed at the appropriate point in the run loop.
During the main runtime of an application, each camera frame goes through the following cycle:
onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender
Camera modules should implement one or more of the following camera lifecycle methods:
Function | Description |
---|---|
onAppResourcesLoaded | Called when we have received the resources attached to an app from the server. |
onAttach | Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running. |
onBeforeRun | Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing. |
onCameraStatusChange | Called when a change occurs during the camera permissions request. |
onCanvasSizeChange | Called when the canvas changes size. |
onDetach | Called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline. |
onDeviceOrientationChange | Called when the device changes landscape/portrait orientation. |
onException | Called when an error occurs in XR. Called with the error object. |
onPaused | Called when XR8.pause() is called. |
onProcessCpu | Called to read results of GPU processing and return usable data. |
onProcessGpu | Called to start GPU processing. |
onRender | Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop. |
onResume | Called when XR8.resume() is called. |
onStart | Called when XR starts. First callback after XR8.run() is called. |
onUpdate | Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpu.modulename and processCpu.modulename where the name is given by module.name = "modulename". |
onVideoSizeChange | Called when the video feed changes size. |
requiredPermissions | Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR. |
Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline. This is done by the module's name.
XR8.addCameraPipelineModule({
  name: 'camerastartupmodule',
  onCameraStatusChange: ({status}) => {
    if (status == 'requesting') {
      myApplication.showCameraPermissionsPrompt()
    } else if (status == 'hasStream') {
      myApplication.dismissCameraPermissionsPrompt()
    } else if (status == 'hasVideo') {
      myApplication.startMainApplication()
    } else if (status == 'failed') {
      myApplication.promptUserToChangeBrowserSettings()
    }
  },
})
// Install a module which gets the camera feed as a UInt8Array.
XR8.addCameraPipelineModule(
XR8.CameraPixelArray.pipelineModule({luminance: true, width: 240, height: 320}))
// Install a module that draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
// Create our custom application logic for scanning and displaying QR codes.
XR8.addCameraPipelineModule({
  name: 'qrscan',
  onProcessCpu: ({onProcessGpuResult}) => {
    // CameraPixelArray.pipelineModule() returned these in onProcessGpu.
    const {pixels, rows, cols, rowBytes} = onProcessGpuResult.camerapixelarray
    const {wasFound, url, corners} = findQrCode(pixels, rows, cols, rowBytes)
    return {wasFound, url, corners}
  },
  onUpdate: ({onProcessCpuResult}) => {
    // These were returned by this module ('qrscan') in onProcessCpu.
    const {wasFound, url, corners} = onProcessCpuResult.qrscan
    if (wasFound) {
      showUrlAndCorners(url, corners)
    }
  },
})
XR8.addCameraPipelineModules([ modules ])
Description
Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array.
Parameters
Parameter | Description |
---|---|
modules | An array of camera pipeline modules. |
const onxrloaded = () => {
XR8.addCameraPipelineModules([ // Add camera pipeline modules.
// Existing pipeline modules.
XR8.GlTextureRenderer.pipelineModule(), // Draws the camera feed.
])
// Request camera permissions and run the camera.
XR8.run({canvas: document.getElementById('camerafeed')})
}
// Wait until the XR javascript has loaded before making XR calls.
window.onload = () => {window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)}
XR8.clearCameraPipelineModules()
Description
Remove all camera pipeline modules from the camera loop.
Parameters
None
XR8.clearCameraPipelineModules()
bool XR8.isPaused()
Parameters
None
Description
Indicates whether or not the XR session is paused.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.pause()
Parameters
None
Description
Pause the current XR session. While paused, device motion is not tracked.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.removeCameraPipelineModule(moduleName)
Description
Removes a module from the camera pipeline.
Parameters
Parameter | Description |
---|---|
moduleName | The name string of a module. |
XR8.removeCameraPipelineModule('reality')
XR8.removeCameraPipelineModules([ moduleNames ])
Description
Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array.
Parameters
Parameter | Description |
---|---|
moduleNames | An array of module name strings, or objects with a name property. |
XR8.removeCameraPipelineModules(['threejsrenderer', 'reality'])
XR8.requiredPermissions()
Parameters
None
Description
Return a list of permissions required by the application.
if (XR8.XrPermissions) {
const permissions = XR8.XrPermissions.permissions()
const requiredPermissions = XR8.requiredPermissions()
if (!requiredPermissions.has(permissions.DEVICE_ORIENTATION)) {
return
}
}
XR8.resume()
Parameters
None
Description
Resume the current XR session after it has been paused.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.run({canvas, webgl2: true, ownRunLoop: true, cameraConfig, glContextConfig, allowedDevices})
Parameters
Property | Type | Default | Description |
---|---|---|---|
canvas | HTMLCanvasElement | The HTML Canvas that the camera feed will be drawn to. | |
webgl2 [Optional] | bool | true | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE . |
Notes:
- cameraConfig: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must first disable world tracking by calling XR8.XrController.configure({disableWorldTracking: true}).
Description
Open the camera and start running the camera run loop.
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed')})
// Disable world tracking (SLAM). This is required to use the front camera.
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), cameraConfig: {direction: XR8.XrConfig.camera().FRONT}})
// Open the camera and start running the camera run loop with an opaque canvas.
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), glContextConfig: {alpha: false, preserveDrawingBuffer: false}})
XR8.runPreRender( timestamp )
Description
Executes all lifecycle updates that should happen before rendering.
IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().
Parameters
Parameter | Description |
---|---|
timestamp | The current time, in milliseconds. |
// Implement the A-Frame component's tick() method
function tick() {
// Check device compatibility and run any necessary view geometry updates and draw the camera feed.
...
// Run XR lifecycle methods
XR8.runPreRender(Date.now())
}
XR8.runPostRender()
Description
Executes all lifecycle updates that should happen after rendering.
IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().
Parameters
None
// Implement the A-Frame component's tock() method
function tock() {
// Check whether XR is initialized
...
// Run XR lifecycle methods
XR8.runPostRender()
}
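Putting the two together, a rough sketch of an external run loop (this assumes XR8.run() was called with ownRunLoop: false):
// Drive the XR lifecycle from your own requestAnimationFrame loop.
const myRunLoop = (timestamp) => {
  XR8.runPreRender(timestamp)  // XR lifecycle updates before drawing
  // ... issue your rendering engine's WebGL draw calls here ...
  XR8.runPostRender()  // XR lifecycle updates after drawing
  requestAnimationFrame(myRunLoop)
}
requestAnimationFrame(myRunLoop)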
XR8.stop()
Parameters
None
Description
Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. Call XR8.run() again to restart the engine after it has been stopped.
XR8.stop()
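For example, a hedged sketch that stops and later restarts the session (the element ids are illustrative):
document.getElementById('stopButton').addEventListener('click', () => {
  XR8.stop()  // close the camera feed and halt tracking
})
document.getElementById('restartButton').addEventListener('click', () => {
  XR8.run({canvas: document.getElementById('camerafeed')})  // reopen the camera and restart
})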
string XR8.version()
Parameters
None
Description
Get the 8th Wall Web engine version.
console.log(XR8.version())
A-Frame (https://aframe.io) is a web framework designed for building virtual reality experiences. By adding 8th Wall Web to your A-Frame project, you can now easily build augmented reality experiences for the web.
Adding 8th Wall Web to A-Frame
Cloud Editor
<meta name="8thwall:renderer" content="aframe">
Self Hosted
8th Wall Web can be added to your A-Frame project in a few easy steps:
<script src="//cdn.8thwall.com/web/aframe/8frame-0.9.2.min.js"></script>
<script src="//apps.8thwall.com/xrweb?appKey=XXXXX"></script>
World Tracking and/or Image Targets
Add the xrweb component to your a-scene tag:
<a-scene xrweb>
xrweb Attributes
Component | Type | Default | Description |
---|---|---|---|
disableWorldTracking | bool | false | If true, turn off SLAM tracking for efficiency. |
cameraDirection | string | back | Desired camera to use. Choose from: back or front . Use cameraDirection: front; with mirroredDisplay: true; for selfie mode. Note that world tracking is only supported with cameraDirection: back; . |
allowedDevices | string | "mobile" | Supported device classes. Choose from: 'mobile' or 'any' . Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams. Note that world tracking is only supported on mobile . |
mirroredDisplay | bool | false | If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode. Should not be enabled if World Tracking (SLAM) is enabled. |
Notes:
- cameraDirection: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must disable world tracking by setting disableWorldTracking: true.
- xrweb and xrface cannot be used at the same time.
Face Effects
Add the xrface component to your a-scene tag:
<a-scene xrface>
xrface Attributes
Component | Type | Default | Description |
---|---|---|---|
cameraDirection | string | back | Desired camera to use. Choose from: back or front . Use cameraDirection: front; with mirroredDisplay: true; for selfie mode. |
allowedDevices | string | "mobile" | Supported device classes. Choose from: 'mobile' or 'any' . Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams. |
mirroredDisplay | bool | false | If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode. |
meshGeometry | array | ['face'] | Configure which portions of the face mesh will have returned triangle indices. Can be any combination of 'face' , 'eyes' and/or 'mouth' . |
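For instance, a hedged usage sketch combining these attributes for a mirrored selfie-style effect:
<a-scene xrface="cameraDirection: front; mirroredDisplay: true; meshGeometry: face, eyes, mouth">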
Notes:
- xrweb and xrface cannot be used at the same time.
Functions
Function | Description |
---|---|
xrwebComponent | Creates an A-Frame component for World Tracking and/or Image Target tracking which can be registered with AFRAME.registerComponent() . Generally won't need to be called directly. |
xrfaceComponent | Creates an A-Frame component for Face Effects tracking which can be registered with AFRAME.registerComponent() . Generally won't need to be called directly. |
<a-scene xrweb>
<a-scene xrweb="disableWorldTracking: true">
<a-scene xrweb="disableWorldTracking: true; cameraDirection: front">
XR8.AFrame.xrwebComponent()
Parameters
None
Description
Creates an A-Frame component which can be registered with AFRAME.registerComponent(). This, however, generally won't need to be called directly. On 8th Wall Web script load, this component will be registered automatically if it is detected that A-Frame has loaded (i.e. if window.AFRAME exists).
window.AFRAME.registerComponent('xrweb', XR8.AFrame.xrwebComponent())
This section describes the events emitted by the "xrweb" or "xrface" A-Frame component.
You can listen for these events in your web application to call a function that handles the event.
Events Emitted
The following events are emitted by both "xrweb" and "xrface":
Event Emitted | Description |
---|---|
camerastatuschange | This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status. |
realityerror | This event is emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed. |
realityready | This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden. |
screenshoterror | This event is emitted in response to the screenshotrequest resulting in an error. |
screenshotready | This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided. |
Events Emitted by xrweb
Event Emitted | Description |
---|---|
xrimageloading | This event is emitted when detection image loading begins. |
xrimagescanning | This event is emitted when all detection images have been loaded and scanning has begun. |
xrimagefound | This event is emitted when an image target is first found. |
xrimageupdated | This event is emitted when an image target changes position, rotation or scale. |
xrimagelost | This event is emitted when an image target is no longer being tracked. |
Events Emitted by xrface
Event Emitted | Description |
---|---|
xrfaceloading | This event is emitted when loading begins for additional face AR resources. |
xrfacescanning | This event is emitted when AR resources have been loaded and scanning has begun. |
xrfacefound | This event is emitted when a face is first found. |
xrfaceupdated | This event is emitted when a face is subsequently found. |
xrfacelost | This event is emitted when a face is no longer being tracked. |
Description
This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
var handleCameraStatusChange = function handleCameraStatusChange(event) {
console.log('status change', event.detail.status);
switch (event.detail.status) {
case 'requesting':
// Do something
break;
case 'hasStream':
// Do something
break;
case 'failed':
event.target.emit('realityerror');
break;
}
};
let scene = this.el.sceneEl
scene.addEventListener('camerastatuschange', handleCameraStatusChange)
Description
This event is emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
let scene = this.el.sceneEl
scene.addEventListener('realityerror', (event) => {
if (XR8.XrDevice.isDeviceBrowserCompatible()) {
// Browser is compatible. Print the exception for more information.
console.log(event.detail.error)
return
}
// Browser is not compatible. Check the reasons why it may not be.
for (let reason of XR8.XrDevice.incompatibleReasons()) {
// Handle each XR8.XrDevice.IncompatibilityReasons
}
})
Description
This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed.
let scene = this.el.sceneEl
scene.addEventListener('realityready', () => {
// Hide loading UI
})
Description
This event is emitted in response to the screenshotrequest resulting in an error.
let scene = this.el.sceneEl
scene.addEventListener('screenshoterror', (event) => {
console.log(event.detail)
// Handle screenshot error.
})
Description
This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided.
let scene = this.el.sceneEl
scene.addEventListener('screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
})
Description
This event is emitted by xrweb when detection image loading begins.
imageloading.detail : { imageTargets: {name, type, metadata} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
const componentMap = {}
const addComponents = ({detail}) => {
detail.imageTargets.forEach(({name, type, metadata}) => {
// ...
})
}
this.el.sceneEl.addEventListener('xrimageloading', addComponents)
Description
This event is emitted by xrweb when all detection images have been loaded and scanning has begun.
imagescanning.detail : { imageTargets: {name, type, metadata, geometry} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
geometry | Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight} ; else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians} |
If type = FLAT, geometry:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL, geometry:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
Description
This event is emitted by xrweb when an image target is first found.
imagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted by xrweb when an image target changes position, rotation or scale.
imageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted by xrweb when an image target is no longer being tracked.
imagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted by xrface when loading begins for additional face AR resources.
xrfaceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
const initMesh = ({detail}) => {
const {pointsPerDetection, uvs, indices} = detail
this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfaceloading', initMesh)
Description
This event is emitted by xrface when all face AR resources have been loaded and scanning has begun.
xrfacescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
const initMesh = ({detail}) => {
const {pointsPerDetection, uvs, indices} = detail
this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfacescanning', initMesh)
Description
This event is emitted by xrface when a face is first found.
xrfacefound.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
Description
This event is emitted by xrface when a face is subsequently found.
xrfaceupdated.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
Description
This event is emitted by xrface when a face is no longer being tracked.
xrfacelost.detail : {id}
Property | Description |
---|---|
id | A numerical id of the face that was lost. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
This section describes the events that are listened for by the "xrweb" A-Frame component.
You can emit these events in your web application to perform various actions:
Event Listener | Description |
---|---|
hidecamerafeed | Hides the camera feed. Tracking does not stop. |
recenter | Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter. |
screenshotrequest | Emits a request to the engine to capture a screenshot of the AFrame canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred. |
showcamerafeed | Shows the camera feed. |
stopxr | Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked. |
scene.emit('hidecamerafeed')
Parameters
None
Description
Hides the camera feed. Tracking does not stop.
let scene = this.el.sceneEl
scene.emit('hidecamerafeed')
scene.emit('recenter', {origin, facing})
Parameters
Parameter | Description |
---|---|
origin: {x, y, z} [Optional] | The location of the new origin. |
facing: {w, x, y, z} [Optional] | A quaternion representing direction the camera should face at the origin. |
Description
Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
If origin and facing are not provided, the camera is reset to the origin previously specified by a call to recenter or the last call to updateCameraProjectionMatrix(). Note: with A-Frame, updateCameraProjectionMatrix() initially gets called based on the initial camera position in the scene.
let scene = this.el.sceneEl
scene.emit('recenter')
// OR
let scene = this.el.sceneEl
scene.emit('recenter', {
origin: {x: 1, y: 4, z: 0},
facing: {w: 0.9856, x:0, y:0.169, z:0}
})
scene.emit('screenshotrequest')
Parameters
None
Description
Emits a request to the engine to capture a screenshot of the AFrame canvas. The engine will emit a screenshotready event with the JPEG compressed image or screenshoterror if an error has occurred.
const scene = this.el.sceneEl
const photoButton = document.getElementById('photoButton')
const image = document.getElementById('screenshotPreview')  // <img> element that will display the captured image
// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  scene.emit('screenshotrequest')
})
scene.addEventListener('screenshotready', event => {
image.src = 'data:image/jpeg;base64,' + event.detail
})
scene.addEventListener('screenshoterror', event => {
console.log("error")
})
scene.emit('showcamerafeed')
Parameters
None
Description
Shows the camera feed.
let scene = this.el.sceneEl
scene.emit('showcamerafeed')
scene.emit('stopxr')
Parameters
None
Description
Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.
let scene = this.el.sceneEl
scene.emit('stopxr')
Babylon.js (https://www.babylonjs.com/) is a complete JavaScript framework for building 3D games and experiences with HTML5 and WebGL. Combined with 8th Wall Web, you can create powerful Web AR experiences.
Tutorial Video:
Description
Provides an integration that interfaces with the Babylon.js environment and lifecycle to drive the Babylon.js camera to do virtual overlays.
Functions
Function | Description |
---|---|
xrCameraBehavior | Get a behavior that can be attached to a Babylon camera to run World Tracking and/or Image Targets. |
faceCameraBehavior | Get a behavior that can be attached to a Babylon camera to run Face Effects. |
XR8.Babylonjs.faceCameraBehavior(config, faceConfig)
Description
Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.faceCameraBehavior())
Parameters
Parameter | Description |
---|---|
config [Optional] | Configuration parameters to pass to XR8.run() |
faceConfig [Optional] | Face configuration parameters to pass to XR8.FaceController |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE . |
faceConfig [Optional] is an object with the following properties:
Parameter | Description |
---|---|
nearClip [Optional] | The distance from the camera of the near clip plane. By default it will use the Babylon camera.minZ |
farClip [Optional] | The distance from the camera of the far clip plane. By default it will use the Babylon camera.maxZ |
meshGeometry [Optional] | List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.MOUTH] . The default is [XR8.FaceController.MeshGeometry.FACE] |
imageTargets [Optional] | List of names of the image targets to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | If true, use left-handed coordinates. |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
Returns
A Babylon JS behavior that connects the Face Effects engine to the Babylon camera and starts the camera feed and tracking.
const startScene = (canvas) => {
const engine = new BABYLON.Engine(canvas, true /* antialias */)
const scene = new BABYLON.Scene(engine)
scene.useRightHandedSystem = false
const camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 0, 0), scene)
camera.rotation = new BABYLON.Vector3(0, scene.useRightHandedSystem ? Math.PI : 0, 0)
camera.minZ = 0.0001
camera.maxZ = 10000
// Add a light to the scene
const directionalLight =
new BABYLON.DirectionalLight("DirectionalLight", new BABYLON.Vector3(-5, -10, 7), scene)
directionalLight.intensity = 0.5
// Mesh logic
const faceMesh = new BABYLON.Mesh("face", scene);
const material = new BABYLON.StandardMaterial("boxMaterial", scene)
material.diffuseColor = new BABYLON.Color3(173 / 255.0, 80 / 255.0, 255 / 255.0)
faceMesh.material = material
let facePoints = []
const runConfig = {
cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
allowedDevices: XR8.XrConfig.device().ANY,
verbose: true,
}
camera.addBehavior(XR8.Babylonjs.faceCameraBehavior(runConfig)) // Connect camera to XR and show camera feed.
engine.runRenderLoop(() => {
scene.render()
})
}
XR8.Babylonjs.xrCameraBehavior(config, xrConfig)
Description
Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())
Parameters
Parameter | Description |
---|---|
config [Optional] | Configuration parameters to pass to XR8.run() |
xrConfig [Optional] | Configuration parameters to pass to XR8.XrController |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE . |
xrConfig [Optional] is an object with the following properties:
Parameter | Description |
---|---|
enableLighting [Optional] | If true, return an estimate of lighting information. |
enableWorldPoints [Optional] | If true, return the map points used for tracking. |
disableWorldTracking [Optional] | If true, turn off SLAM tracking for efficiency. |
imageTargets [Optional] | List of names of the image targets to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | If true, use left-handed coordinates. |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
Returns
A Babylon JS behavior that connects the XR engine to the Babylon camera and starts the camera feed and tracking.
let surface, engine, scene, camera
const startScene = () => {
const canvas = document.getElementById('renderCanvas')
engine = new BABYLON.Engine(canvas, true, { stencil: true, preserveDrawingBuffer: true })
engine.enableOfflineSupport = false
scene = new BABYLON.Scene(engine)
camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 3, 0), scene)
initXrScene({ scene, camera }) // Add objects to the scene and set starting camera position.
// Connect the camera to the XR engine and show camera feed
camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())
engine.runRenderLoop(() => {
scene.render()
})
window.addEventListener('resize', () => {
engine.resize()
})
}
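As a hedged variant, both parameter objects can be passed explicitly ('my-target' is a hypothetical image target name):
// Pass run-time and tracking configuration to the behavior.
camera.addBehavior(XR8.Babylonjs.xrCameraBehavior(
  {cameraConfig: {direction: XR8.XrConfig.camera().BACK}},  // config passed to XR8.run()
  {enableLighting: true, imageTargets: ['my-target']}  // xrConfig passed to XR8.XrController
))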
Image Target Observables
onXrImageLoadingObservable: Fires when detection image loading begins.
onXrImageLoadingObservable : { imageTargets: {name, type, metadata} }
onXrImageScanningObservable: Fires when all detection images have been loaded and scanning has begun.
onXrImageScanningObservable : { imageTargets: {name, type, metadata, geometry} }
onXrImageFoundObservable: Fires when an image target is first found.
onXrImageFoundObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
onXrImageUpdatedObservable: Fires when an image target changes position, rotation or scale.
onXrImageUpdatedObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
onXrImageLostObservable: Fires when an image target is no longer being tracked.
onXrImageLostObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
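A hedged sketch pairing the found/lost observables (this assumes a Babylon mesh named target already exists in your scene):
scene.onXrImageFoundObservable.add(e => {
  target.setEnabled(true)  // show the mesh when its image target is detected
})
scene.onXrImageLostObservable.add(e => {
  target.setEnabled(false)  // hide the mesh when tracking is lost
})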
Face Effects Observables
onFaceLoadingObservable: Fires when loading begins for additional face AR resources.
onFaceLoadingObservable : {maxDetections, pointsPerDetection, indices, uvs}
onFaceScanningObservable: Fires when all face AR resources have been loaded and scanning has begun.
onFaceScanningObservable: {maxDetections, pointsPerDetection, indices, uvs}
onFaceFoundObservable: Fires when a face is first found.
onFaceFoundObservable : {id, transform, attachmentPoints, vertices, normals}
onFaceUpdatedObservable: Fires when a face is subsequently found.
onFaceUpdatedObservable : {id, transform, attachmentPoints, vertices, normals}
onFaceLostObservable: Fires when a face is no longer being tracked.
onFaceLostObservable : {id}
scene.onXrImageUpdatedObservable.add(e => {
target.position.copyFrom(e.position)
target.rotationQuaternion.copyFrom(e.rotation)
target.scaling.set(e.scale, e.scale, e.scale)
})
// this is called when the face is first found. It provides the static information about the
// face such as the UVs and indices
scene.onFaceLoadingObservable.add((event) => {
const {indices, maxDetections, pointsPerDetection, uvs} = event
// Create one small box per face point so the mesh points can be visualized
facePoints = Array(pointsPerDetection)
for (let i = 0; i < pointsPerDetection; i++) {
const facePoint = BABYLON.MeshBuilder.CreateBox("box", {size: 0.02}, scene)
facePoint.material = material
facePoint.parent = faceMesh
facePoints[i] = facePoint
}
})
// this is called each time the face is updated which is on a per-frame basis
scene.onFaceUpdatedObservable.add((event) => {
const {vertices, normals, transform} = event;
const {scale, position, rotation} = transform
vertices.forEach((v, i) => {
facePoints[i].position.x = v.x
facePoints[i].position.y = v.y
facePoints[i].position.z = v.z
})
faceMesh.scalingDeterminant = scale
faceMesh.position = position
faceMesh.rotationQuaternion = rotation
})
8th Wall camera applications are built using a camera pipeline module framework. Applications install modules which then control the behavior of the application at runtime.
Refer to XR8.addCameraPipelineModule() for details on adding camera pipeline modules to your application.
A camera pipeline module object must have a .name string which is unique within the application. It should implement one or more of the following camera lifecycle methods. These methods will be executed at the appropriate point in the run loop.
During the main runtime of an application, each camera frame goes through the following cycle:
onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender
Camera modules should implement one or more of the following camera lifecycle methods:
Function | Description |
---|---|
onAppResourcesLoaded | Called when we have received the resources attached to an app from the server. |
onAttach | Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running. |
onBeforeRun | Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing. |
onCameraStatusChange | Called when a change occurs during the camera permissions request. |
onCanvasSizeChange | Called when the canvas changes size. |
onDetach | Called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline. |
onDeviceOrientationChange | Called when the device changes landscape/portrait orientation. |
onException | Called when an error occurs in XR. Called with the error object. |
onPaused | Called when XR8.pause() is called. |
onProcessCpu | Called to read results of GPU processing and return usable data. |
onProcessGpu | Called to start GPU processing. |
onRender | Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop. |
onResume | Called when XR8.resume() is called. |
onStart | Called when XR starts. First callback after XR8.run() is called. |
onUpdate | Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpu.modulename and processCpu.modulename where the name is given by module.name = "modulename". |
onVideoSizeChange | Called when the video feed changes size. |
requiredPermissions | Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR. |
Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline. This is done by the module's name.
onAppResourcesLoaded: ({ framework, imageTargets, version })
Description
Called when we have received the resources attached to an app from the server.
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
imageTargets [Optional] | An array of image targets with the fields {imagePath, metadata, name} |
version | The engine version, e.g. 14.0.8.949 |
XR8.addCameraPipelineModule({
name: 'myPipelineModule',
onAppResourcesLoaded: ({ framework, version, imageTargets }) => {
//...
},
})
onAttach: ({framework, canvas, GLctx, computeCtx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, status, stream, video, version, imageTargets, config})
Description
onAttach() is called before the first time a module receives frame updates, whether the module was added before or after the pipeline started running. It is called with the most recent data available from the camera lifecycle, as listed below:
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
canvas | The canvas that backs GPU processing and user display. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
isWebgl2 | True if GLCtx is a WebGL2RenderingContext. |
orientation | The rotation of the ui from portrait, in degrees (-90, 0, 90, 180). |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
status | One of [ 'requesting' , 'hasStream' , 'hasVideo' , 'failed' ] |
stream | The MediaStream associated with the camera feed. |
video | The video dom element displaying the stream. |
version [Optional] | The engine version, e.g. 14.0.8.949, if app resources are loaded. |
imageTargets [Optional] | An array of image targets with the fields {imagePath, metadata, name} |
config | The configuration parameters that were passed to XR8.run(). |
onCameraStatusChange: ({ status, stream, video, config })
Description
Called when a change occurs during the camera permissions request.
Called with the status, and, if applicable, a reference to the newly available data. The typical status flow will be:
requesting -> hasStream -> hasVideo.
Parameters
Parameter | Description |
---|---|
status | One of [ 'requesting' , 'hasStream' , 'hasVideo' , 'failed' ] |
stream: [Optional] | The MediaStream associated with the camera feed, if status is hasStream. |
video: [Optional] | The video DOM element displaying the stream, if status is hasVideo. |
config | The configuration parameters that were passed to XR8.run(), if status is "requesting". |
The status parameter has the following states:
State | Description |
---|---|
requesting | In 'requesting', the browser is opening the camera, and if applicable, checking the user permissions. In this state, it is appropriate to display a prompt to the user to accept camera permissions. |
hasStream | Once the user permissions are granted and the camera is successfully opened, the status switches to 'hasStream' and any user prompts regarding permissions can be dismissed. |
hasVideo | Once camera frame data starts to be available for processing, the status switches to 'hasVideo', and the camera feed can begin displaying. |
failed | If the camera feed fails to open, the status is 'failed'. In this case it's possible that the user has denied permissions, and so helping them to re-enable permissions is advisable. |
XR8.addCameraPipelineModule({
name: 'camerastartupmodule',
onCameraStatusChange: ({status}) => {
if (status == 'requesting') {
myApplication.showCameraPermissionsPrompt()
} else if (status == 'hasStream') {
myApplication.dismissCameraPermissionsPrompt()
} else if (status == 'hasVideo') {
myApplication.startMainApplication()
} else if (status == 'failed') {
myApplication.promptUserToChangeBrowserSettings()
}
},
})
onCanvasSizeChange: ({ GLctx, computeCtx, videoWidth, videoHeight, canvasWidth, canvasHeight })
Description
Called when the canvas changes size. Called with dimensions of video and canvas.
Parameters
Parameter | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onCanvasSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
},
})
onDetach: ({framework})
Description
onDetach is called after the last time a module receives frame updates. This is either after stop is called, or after the module is manually removed from the pipeline.
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
onDeviceOrientationChange: ({ GLctx, computeCtx, videoWidth, videoHeight, orientation })
Description
Called when the device changes landscape/portrait orientation.
Parameters
Parameter | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
orientation | The rotation of the ui from portrait, in degrees (-90, 0, 90, 180). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onDeviceOrientationChange: ({ GLctx, videoWidth, videoHeight, orientation }) => {
// handleResize({ GLctx, videoWidth, videoHeight, orientation })
},
})
onException: (error)
Description
Called when an error occurs in XR. Called with the error object.
Parameters
Parameter | Description |
---|---|
error | The error object that was thrown |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onException : (error) => {
console.error('XR threw an exception', error)
},
})
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onPaused: () => {
console.log('pausing application')
},
})
onProcessGpu: ({ framework, frameStartResult })
Description
Called to start GPU processing.
Parameters
Parameter | Description |
---|---|
framework | { dispatchEvent(eventName, detail) } : Emits a named event with the supplied detail. |
frameStartResult | { cameraTexture, computeTexture, GLctx, computeCtx, textureWidth, textureHeight, orientation, videoTime, repeatFrame } |
The frameStartResult parameter has the following properties:
Property | Description |
---|---|
cameraTexture | The drawing canvas's WebGLTexture containing camera feed data. |
computeTexture | The compute canvas's WebGLTexture containing camera feed data. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
textureWidth | The width (in pixels) of the camera feed texture. |
textureHeight | The height (in pixels) of the camera feed texture. |
orientation | The rotation of the ui from portrait, in degrees (-90, 0, 90, 180). |
videoTime | The timestamp of this video frame. |
repeatFrame | True if the camera feed has not updated since the last call. |
Returns
Any data that you wish to provide to onProcessCpu and onUpdate should be returned. It will be provided to those methods as processGpuResult.modulename
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessGpu: ({frameStartResult}) => {
const {cameraTexture, GLctx, textureWidth, textureHeight} = frameStartResult
if (!cameraTexture.name) {
console.error('[index] Camera texture does not have a name')
}
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Do relevant GPU processing here
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// These fields will be provided to onProcessCpu and onUpdate
return {gpuDataA, gpuDataB}
},
})
onProcessCpu: ({ framework, frameStartResult, processGpuResult })
Description
Called to read results of GPU processing and return usable data. Called with { frameStartResult, processGpuResult }. Data returned by modules in onProcessGpu will be present as processGpuResult.modulename, where the name is given by module.name = "modulename".
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
Returns
Any data that you wish to provide to onUpdate should be returned. It will be provided to that method as processCpuResult.modulename
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ frameStartResult, processGpuResult }) => {
const GLctx = frameStartResult.GLctx
const { cameraTexture } = frameStartResult
const { camerapixelarray, mycamerapipelinemodule } = processGpuResult
// Do something interesting with mycamerapipelinemodule.gpuDataA and mycamerapipelinemodule.gpuDataB
...
// These fields will be provided to onUpdate
return {cpuDataA, cpuDataB}
},
})
onRender: ()
Description
Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
Parameters
None
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onRender: () => {
// This is already done by XR8.Threejs.pipelineModule() but is provided here as an illustration.
XR8.Threejs.xrScene().renderer.render()
},
})
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onResume: () => {
console.log('resuming application')
},
})
onStart: ({ canvas, GLctx, computeCtx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, config })
Description
Called when XR starts.
Parameters
Parameter | Description |
---|---|
canvas | The canvas that backs GPU processing and user display. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
isWebgl2 | True if GLCtx is a WebGL2RenderingContext. |
orientation | The rotation of the ui from portrait, in degrees (-90, 0, 90, 180). |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
config | The configuration parameters that were passed to XR8.run(). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onStart: ({canvasWidth, canvasHeight}) => {
// Get the three.js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
// reason we can access it here now is because 'mycamerapipelinemodule' was installed after
// XR8.Threejs.pipelineModule().
const {scene, camera} = XR8.Threejs.xrScene()
// Add some objects to the scene and set the starting camera position.
myInitXrScene({scene, camera})
// Sync the xr controller's 6DoF position and camera parameters with our scene.
XR8.XrController.updateCameraProjectionMatrix({
origin: camera.position,
facing: camera.quaternion,
})
},
})
onUpdate: ({ framework, frameStartResult, processGpuResult, processCpuResult })
Description
Called to update the scene before render. Called with { framework, frameStartResult, processGpuResult, processCpuResult }. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename, where the name is given by module.name = "modulename".
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
processCpuResult | Data returned by all installed modules during onProcessCpu. |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onUpdate: ({ frameStartResult, processGpuResult, processCpuResult }) => {
if (!processCpuResult.reality) {
return
}
const {rotation, position, intrinsics} = processCpuResult.reality
const {cpuDataA, cpuDataB} = processCpuResult.mycamerapipelinemodule
// ...
},
})
onVideoSizeChange: ({ GLctx, computeCtx, videoWidth, videoHeight, canvasWidth, canvasHeight, orientation })
Description
Called when the video feed changes size. Called with dimensions of video and canvas as well as device orientation.
Parameters
Parameters | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
orientation | The rotation of the ui from portrait, in degrees (-90, 0, 90, 180). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onVideoSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
},
})
requiredPermissions: ([permissions])
Description
requiredPermissions is used to define the list of permissions required by a pipeline module.
Parameters
Parameter | Description |
---|---|
permissions | An array of XR8.XrPermissions.permissions() required by the pipeline module. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})
Description
Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.
Functions
Function | Description |
---|---|
pipelineModule | A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing. |
XR8.CameraPixelArray.pipelineModule({ luminance, maxDimension, width, height })
Description
A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.
Parameters
Parameter | Default | Description |
---|---|---|
luminance [Optional] | false | If true, output grayscale instead of RGBA |
maxDimension [Optional] | | The size in pixels of the longest dimension of the output image. The shorter dimension will be scaled relative to the size of the camera input so that the image is resized without cropping or distortion. |
width [Optional] | The width of the camera feed texture. | Width of the output image. Ignored if maxDimension is specified. |
height [Optional] | The height of the camera feed texture. | Height of the output image. Ignored if maxDimension is specified. |
Returns
Return value is an object made available to onProcessCpu and onUpdate as:
processGpuResult.camerapixelarray: {rows, cols, rowBytes, pixels, srcTex}
Property | Description |
---|---|
rows | Height in pixels of the output image. |
cols | Width in pixels of the output image. |
rowBytes | Number of bytes per row of the output image. |
pixels | A UInt8Array of pixel data. |
srcTex | A texture containing the source image for the returned pixels. |
XR8.addCameraPipelineModule(XR8.CameraPixelArray.pipelineModule({ luminance: true }))
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ processGpuResult }) => {
const { camerapixelarray } = processGpuResult
if (!camerapixelarray || !camerapixelarray.pixels) {
return
}
const { rows, cols, rowBytes, pixels } = camerapixelarray
...
},
})
Description
Provides a camera pipeline module that can generate screenshots of the current scene.
Functions
Function | Description |
---|---|
configure | Configures the expected result of canvas screenshots. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed. |
setForegroundCanvas | Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas. |
takeScreenshot | Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided. |
XR8.CanvasScreenshot.configure({ maxDimension, jpgCompression })
Description
Configures the expected result of canvas screenshots.
Parameters
Parameter | Default | Description |
---|---|---|
maxDimension [Optional] | 1280 | The value of the largest expected dimension. |
jpgCompression [Optional] | 75 | 1-100 value representing the JPEG compression quality. 100 is little to no loss; 1 is a very low quality image. |
XR8.CanvasScreenshot.configure({ maxDimension: 640, jpgCompression: 50 })
XR8.CanvasScreenshot.pipelineModule()
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.
Parameters
None
Returns
A CanvasScreenshot pipeline module that can be added via XR8.addCameraPipelineModule().
XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())
XR8.CanvasScreenshot.setForegroundCanvas(canvas)
Description
Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.
Only required if you use separate canvases for camera feed vs virtual objects.
Parameters
Parameter | Description |
---|---|
canvas | The canvas to use as a foreground in the screenshot |
const myOtherCanvas = document.getElementById('canvas2')
XR8.CanvasScreenshot.setForegroundCanvas(myOtherCanvas)
XR8.CanvasScreenshot.takeScreenshot({ onProcessFrame })
Description
Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.
Parameters
Parameter | Description |
---|---|
onProcessFrame [Optional] | Callback where you can implement additional drawing to the screenshot 2d canvas. |
XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())
XR8.CanvasScreenshot.takeScreenshot().then(
data => {
// myImage is an <img> HTML element
const image = document.getElementById('myImage')
image.src = 'data:image/jpeg;base64,' + data
},
error => {
console.log(error)
// Handle screenshot error.
})
Description
FaceController provides face detection and meshing, and interfaces for configuring tracking.
Functions
Function | Description |
---|---|
configure | Configures what processing is performed by FaceController. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position. |
AttachmentPoints | Points on the face you can anchor content to. |
MeshGeometry | Options for defining which portions of the face have mesh triangles returned. |
XR8.FaceController.configure({ nearClip, farClip, meshGeometry, coordinates })
Description
Configures what processing is performed by FaceController.
Parameters
Parameter | Description |
---|---|
nearClip [Optional] | The distance from the camera of the near clip plane. |
farClip [Optional] | The distance from the camera of the far clip plane. |
meshGeometry [Optional] | List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.NOSE,] . The default is [XR8.FaceController.MeshGeometry.FACE] |
coordinates [Optional] | {origin, scale, axes, mirroredDisplay} |
coordinates [Optional] is an object with the following properties:
Parameter | Description |
---|---|
origin [Optional] | {position: {x, y, z}, rotation: {w, x, y, z}} of the camera. |
scale [Optional] | Scale of the scene. |
axes [Optional] | 'LEFT_HANDED' or 'RIGHT_HANDED' . Default is 'RIGHT_HANDED' |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
IMPORTANT: FaceController and XrController cannot be used at the same time.
XR8.FaceController.configure({
meshGeometry: [XR8.FaceController.MeshGeometry.FACE],
coordinates: {
mirroredDisplay: true,
axes: 'RIGHT_HANDED',
},
})
XR8.FaceController.pipelineModule()
Parameters
None
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
Returns
Return value is an object made available to onUpdate as:
processCpuResult.reality: { rotation, position, intrinsics, cameraFeedTexture }
Property | Description |
---|---|
rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
position: {x, y, z} | The position of the camera in the scene. |
intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
cameraFeedTexture | The WebGLTexture containing camera feed data. |
Dispatched Events
faceloading: Fires when loading begins for additional face AR resources.
faceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
facescanning: Fires when all face AR resources have been loaded and scanning has begun.
facescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
facefound: Fires when a face is first found.
facefound.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
faceupdated: Fires when a face is subsequently found.
faceupdated.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
facelost: Fires when a face is no longer being tracked.
facelost.detail : { id }
Property | Description |
---|---|
id | A numerical id of the face that was lost. |
XR8.addCameraPipelineModule(XR8.FaceController.pipelineModule())
Enumeration
Description
Points of the face you can anchor content to.
Properties
Property | Value | Description |
---|---|---|
FOREHEAD | forehead | Forehead |
RIGHT_EYEBROW_INNER | rightEyebrowInner | Inner side of right eyebrow |
RIGHT_EYEBROW_MIDDLE | rightEyebrowMiddle | Middle of right eyebrow |
RIGHT_EYEBROW_OUTER | rightEyebrowOuter | Outer side of right eyebrow |
LEFT_EYEBROW_INNER | leftEyebrowInner | Inner side of left eyebrow |
LEFT_EYEBROW_MIDDLE | leftEyebrowMiddle | Middle of left eyebrow |
LEFT_EYEBROW_OUTER | leftEyebrowOuter | Outer side of left eyebrow |
LEFT_EAR | leftEar | Left ear |
RIGHT_EAR | rightEar | Right ear |
LEFT_CHEEK | leftCheek | Left cheek |
RIGHT_CHEEK | rightCheek | Right cheek |
NOSE_BRIDGE | noseBridge | Bridge of the nose |
NOSE_TIP | noseTip | Tip of the nose |
LEFT_EYE | leftEye | Left eye |
RIGHT_EYE | rightEye | Right eye |
LEFT_EYE_OUTER_CORNER | leftEyeOuterCorner | Outer corner of left eye |
RIGHT_EYE_OUTER_CORNER | rightEyeOuterCorner | Outer corner of right eye |
UPPER_LIP | upperLip | Upper lip |
LOWER_LIP | lowerLip | Lower lip |
MOUTH | mouth | Mouth |
MOUTH_RIGHT_CORNER | mouthRightCorner | Right corner of mouth |
MOUTH_LEFT_CORNER | mouthLeftCorner | Left corner of mouth |
CHIN | chin | Chin |
Enumeration
Description
Options for defining which portions of the face have mesh triangles returned.
Properties
Property | Value | Description |
---|---|---|
FACE | face | Return geometry for the face. |
MOUTH | mouth | Return geometry for the mouth. |
EYES | eyes | Return geometry for the eyes. |
Description
Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations.
Functions
Function | Description |
---|---|
configure | Configures the pipeline module that draws the camera feed to the canvas. |
create | Creates an object for rendering from a texture to a canvas or another texture. |
fillTextureViewport | Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create() |
getGLctxParameters | Gets the current set of WebGL bindings so that they can be restored later. |
pipelineModule | Creates a pipeline module that draws the camera feed to the canvas. |
setGLctxParameters | Restores the WebGL bindings that were saved with getGLctxParameters. |
setTextureProvider | Sets a provider that passes the texture to draw. |
XR8.GlTextureRenderer.configure({ vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })
Description
Configures the pipeline module that draws the camera feed to the canvas.
Parameters
Parameter | Description |
---|---|
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
mirroredDisplay [Optional] | If true, flip the rendering left-right. |
const purpleShader =
// Purple.
` precision mediump float;
varying vec2 texUv;
uniform sampler2D sampler;
void main() {
vec4 c = texture2D(sampler, texUv);
float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
vec3 p = vec3(.463, .067, .712);
vec3 p1 = vec3(1.0, 1.0, 1.0) - p;
vec3 rgb = y < .25 ? (y * 4.0) * p : ((y - .25) * 1.333) * p1 + p;
gl_FragColor = vec4(rgb, c.a);
}`
XR8.GlTextureRenderer.configure({fragmentSource: purpleShader})
XR8.GlTextureRenderer.create({ GLctx, vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })
Description
Creates an object for rendering from a texture to a canvas or another texture.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext or WebGL2RenderingContext to use for rendering. If no toTexture is specified, content will be drawn to this context's canvas. |
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
mirroredDisplay [Optional] | If true, flip the rendering left-right. |
Returns
Returns an object: {render, destroy, shader}
Property | Description |
---|---|
render({ renderTexture, viewport }) | A function that renders the renderTexture to the specified viewport. Depending on if toTexture is supplied, the viewport is either on the canvas that created GLctx, or it's relative to the render texture provided. |
destroy | Clean up resources associated with this GlTextureRenderer. |
shader | Gets a handle to the shader being used to draw the texture. |
The render function has the following parameters:
Parameter | Description |
---|---|
renderTexture | A WebGlTexture (source) to draw. |
viewport | The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport(). |
The viewport is specified by { width, height, offsetX, offsetY }:
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX [Optional] | The minimum x-coordinate (in pixels) to draw to. |
offsetY [Optional] | The minimum y-coordinate (in pixels) to draw to. |
XR8.GlTextureRenderer.fillTextureViewport(srcWidth, srcHeight, destWidth, destHeight)
Description
Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()
Parameters
Parameter | Description |
---|---|
srcWidth | The width of the texture you are rendering. |
srcHeight | The height of the texture you are rendering. |
destWidth | The width of the render target. |
destHeight | The height of the render target. |
Returns
An object: { width, height, offsetX, offsetY }
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX | The minimum x-coordinate (in pixels) to draw to. |
offsetY | The minimum y-coordinate (in pixels) to draw to. |
XR8.GlTextureRenderer.getGLctxParameters(GLctx, textureUnits)
Description
Gets the current set of WebGL bindings so that they can be restored later.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext or WebGL2RenderingContext to get bindings from. |
textureUnits | The texture units to preserve state for, e.g. [GLctx.TEXTURE0] |
Returns
A struct to pass to setGLctxParameters.
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state
XR8.GlTextureRenderer.pipelineModule({ vertexSource, fragmentSource, toTexture, flipY })
Description
Creates a pipeline module that draws the camera feed to the canvas.
Parameters
Parameter | Description |
---|---|
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGlTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
Returns
Return value is an object {viewport, shader} made available to onProcessCpu and onUpdate as processGpuResult.gltexturerenderer, with the following properties:
Property | Description |
---|---|
viewport | The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport(). |
shader | A handle to the shader being used to draw the texture. |
processGpuResult.gltexturerenderer.viewport: { width, height, offsetX, offsetY }
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX | The minimum x-coordinate (in pixels) to draw to. |
offsetY | The minimum y-coordinate (in pixels) to draw to. |
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ processGpuResult }) => {
const {viewport, shader} = processGpuResult.gltexturerenderer
if (!viewport) {
return
}
const { width, height, offsetX, offsetY } = viewport
// ...
},
})
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
Description
Restores the WebGL bindings that were saved with getGLctxParameters.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext or WebGL2RenderingContext to restore bindings on. |
restoreParams | The output of getGLctxParameters. |
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state
XR8.GlTextureRenderer.setTextureProvider(({ frameStartResult, processGpuResult, processCpuResult }) => {} )
Description
Sets a provider that passes the texture to draw. This should be a function that takes the same inputs as cameraPipelineModule.onUpdate.
Parameters
setTextureProvider() takes a function with the following parameters:
Parameter | Description |
---|---|
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
processCpuResult | Data returned by all installed modules during onProcessCpu. |
XR8.GlTextureRenderer.setTextureProvider(
({processGpuResult}) => {
return processGpuResult.camerapixelarray ? processGpuResult.camerapixelarray.srcTex : null
})
Description
Provides a camera pipeline module that allows you to record a video in MP4 format.
Functions
Function | Description |
---|---|
configure | Configure video recording settings. |
pipelineModule | Creates a pipeline module that records video in MP4 format. |
recordVideo | Start recording. |
requestMicrophone | Enables recording of audio (if not enabled automatically), requesting permissions if needed. |
stopRecording | Stop recording. |
RequestMicOptions | Enum for whether or not to automatically request microphone permissions. |
XR8.MediaRecorder.configure({ coverImageUrl, enableEndCard, endCardCallToAction, footerImageUrl, foregroundCanvas, maxDurationMs, maxDimension, shortLink, configureAudioOutput, audioContext, requestMic })
Description
Configures various MediaRecorder parameters.
Parameters
Parameter | Default | Description |
---|---|---|
coverImageUrl [Optional] | cover image configured in project, null otherwise | Image source for cover image. |
enableEndCard [Optional] | false | If true, enable end card. |
endCardCallToAction [Optional] | 'Try it at: ' | Sets the text string for call to action. |
footerImageUrl [Optional] | null | Image source for the footer image. |
foregroundCanvas [Optional] | null | The canvas to use as a foreground in the recorded video. |
maxDurationMs [Optional] | 15000 | Maximum duration of video, in milliseconds. |
maxDimension [Optional] | 1280 | Max dimension of the captured recording, in pixels. |
shortLink [Optional] | 8th.io shortlink from project dashboard | Sets the text string for shortlink. |
configureAudioOutput [Optional] | null | User-provided function that receives the microphoneInput and audioProcessor audio nodes for complete control of the recording's audio. Nodes attached to the audio processor node will be part of the recording's audio. The function must return the end node of the user's audio graph. |
audioContext [Optional] | null | User-provided AudioContext instance. Engines like THREE.js and BABYLON.js have their own internal audio instance. In order for recordings to contain sounds defined in those engines, you'll want to provide their AudioContext instance. |
requestMic [Optional] | 'auto' | Determines when the audio permissions are requested. The options are provided in XR8.MediaRecorder.RequestMicOptions. |
The function passed to configureAudioOutput takes an object with the following parameters:
Parameter | Description |
---|---|
microphoneInput | A GainNode that contains the user’s mic input. If the user’s permissions are not accepted, then this node won’t output the mic input but will still be present. |
audioProcessor | A ScriptProcessorNode that passes audio data to the recorder. If you want an audio node to be part of the recording's audio output, you must connect it to the audioProcessor. |
XR8.MediaRecorder.configure({
maxDurationMs: 15000,
enableEndCard: true,
endCardCallToAction: 'Try it at:',
shortLink: '8th.io/my-link',
})
const userConfiguredAudioOutput = ({microphoneInput, audioProcessor}) => {
const myCustomAudioGraph = ...
myCustomAudioSource.connect(myCustomAudioGraph)
microphoneInput.connect(myCustomAudioGraph)
// connect audio graph end node to hardware
myCustomAudioGraph.connect(microphoneInput.context.destination)
// audio graph will be automatically connected to processor
return myCustomAudioGraph
}
const threejsAudioContext = THREE.AudioContext.getContext()
XR8.MediaRecorder.configure({
configureAudioOutput: userConfiguredAudioOutput,
audioContext: threejsAudioContext,
requestMic: XR8.MediaRecorder.RequestMicOptions.AUTO,
})
XR8.MediaRecorder.pipelineModule()
Description
Provides a camera pipeline module that allows you to record a video in MP4 format.
Parameters
None
Returns
A MediaRecorder pipeline module that allows you to record a video.
XR8.addCameraPipelineModule(XR8.MediaRecorder.pipelineModule())
XR8.MediaRecorder.recordVideo({ onError, onProcessFrame, onStart, onStop, onVideoReady })
Description
Start recording.
This function takes an object that implements one or more of the following media recorder lifecycle callback methods:
Parameters
Parameter | Description |
---|---|
onError | Callback when there is an error. |
onProcessFrame | Callback for adding an overlay to the video. |
onStart | Callback when recording has started. |
onStop | Callback when recording has stopped. |
onVideoReady | Callback when recording has completed and video is ready. |
XR8.MediaRecorder.recordVideo({
onVideoReady: (result) => window.dispatchEvent(new CustomEvent('recordercomplete', {detail: result})),
onStop: () => showLoading(),
onError: () => clearState(),
onProcessFrame: ({elapsedTimeMs, maxRecordingMs, ctx}) => {
// overlay some red text over the video
ctx.fillStyle = 'red'
ctx.font = '50px "Nunito"'
ctx.fillText(`${elapsedTimeMs}/${maxRecordingMs}`, 50, 50)
const timeLeft = ( 1 - elapsedTimeMs / maxRecordingMs)
// update the progress bar to show how much time is left
progressBar.style.strokeDashoffset = `${100 * timeLeft }`
},
})
XR8.MediaRecorder.requestMicrophone()
Description
Enables recording of audio (if not enabled automatically), requesting permissions if needed.
Returns a promise that lets the client know when the stream is ready. If you begin recording before the audio stream is ready, then you may miss the user's microphone output at the beginning of the recording.
Parameters
None
XR8.MediaRecorder.requestMicrophone()
.then(() => {
console.log('Microphone requested!')
})
.catch((err) => {
console.log('Hit an error: ', err)
})
XR8.MediaRecorder.stopRecording()
Description
Stop recording.
Parameters
None
XR8.MediaRecorder.stopRecording()
Enumeration
Description
Enum for whether or not to automatically request microphone permissions.
Properties
Property | Value | Description |
---|---|---|
AUTO | auto | Automatically request microphone permissions in onAttach(). |
MANUAL | manual | Microphone permissions are NOT requested in onAttach(). Any other audio added to the app is still recorded if added to the AudioContext and connected to the audioProcessor provided to the user's configureAudioOutput function passed to XR8.MediaRecorder.configure(). You can request microphone permissions manually by calling XR8.MediaRecorder.requestMicrophone(). |
PlayCanvas (https://www.playcanvas.com/) is an open-source 3D game engine/interactive 3D application engine alongside a proprietary cloud-hosted creation platform that allows for simultaneous editing from multiple computers via a browser-based interface.
Description
Provides an integration that interfaces with the PlayCanvas environment and lifecycle to drive the PlayCanvas camera to do virtual overlays.
Functions
Function | Description |
---|---|
runXr | Opens the camera and starts running World Tracking and/or Image Tracking in a playcanvas scene. |
runFaceEffects | Opens the camera and starts running Face Effects in a playcanvas scene. |
stopXr | Remove the modules added in runXr and stop the camera. |
stopFaceEffects | Remove the modules added in runFaceEffects and stop the camera. |
To get started go to https://playcanvas.com/the8thwall and fork one of our sample projects:
AR World Tracking Starter Kit: An application to get you started quickly creating WebAR world tracking applications in PlayCanvas.
AR Image Tracking Starter Kit: An application to get you started quickly creating WebAR image tracking applications in PlayCanvas.
AR Face Effects Starter Kit: An application to get you started quickly creating Face Effects WebAR applications in PlayCanvas.
World Tracking and Face Effects: An example that illustrates how to switch between World Tracking and Face Effects in a single project.
Add your App Key
Go to Settings -> External Scripts
The following two scripts should be added:
https://cdn.8thwall.com/web/xrextras/xrextras.js
https://apps.8thwall.com/xrweb?appKey=XXXXXX
(Note: replace the X's with your own unique App Key obtained from the 8th Wall Console.)
Enable "Transparent Canvas"
Go to Settings -> Rendering
Make sure that "Transparent Canvas" is checked
Disable "Prefer WebGL 2.0"
Go to Settings -> Rendering
Make sure that "Prefer WebGL 2.0" is unchecked
Add XRController
NOTE: Only for SLAM and/or Image Target projects. FaceController and XrController cannot be used simultaneously.
The 8th Wall sample PlayCanvas projects are populated with an XRController game object. If you are starting with a blank project, download xrcontroller.js from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.
Options:
Option | Description |
---|---|
disableWorldTracking | If true, turn off SLAM tracking for efficiency. |
shadowmaterial | Material which you want to use as a transparent shadow receiver (e.g. for ground shadows). Typically this material will be used on a "ground" plane entity positioned at (0,0,0) |
Add FaceController
NOTE: Only for Face Effects projects. FaceController and XrController cannot be used simultaneously.
The 8th Wall sample PlayCanvas projects are populated with a FaceController game object. If you are starting with a blank project, download facecontroller.js from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.
Option | Description |
---|---|
headAnchor | The entity to anchor to the root of the head in world space. |
XR8.PlayCanvas.runXr( {pcCamera, pcApp}, [extraModules], config )
Description
Opens the camera and starts running XR World Tracking and/or Image Targets in a playcanvas scene.
Parameters
Parameter | Description |
---|---|
pcCamera | The playcanvas scene camera to drive with AR. |
pcApp | The playcanvas app, typically this.app . |
extraModules [Optional] | An optional array of extra pipeline modules to install. |
config [Optional] | Configuration parameters to pass to XR8.run() |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
var xrcontroller = pc.createScript('xrcontroller')
// Optionally, world tracking can be disabled to increase efficiency when tracking image targets.
xrcontroller.attributes.add('disableWorldTracking', {type: 'boolean'})
xrcontroller.prototype.initialize = function() {
const disableWorldTracking = this.disableWorldTracking
// After XR has fully loaded, open the camera feed and start displaying AR.
const runOnLoad = ({pcCamera, pcApp}, extramodules) => () => {
XR8.xrController().configure({disableWorldTracking})
XR8.PlayCanvas.runXr({pcCamera, pcApp}, extramodules)
}
// Find the camera in the playcanvas scene, and tie it to the motion of the user's phone in the
// world.
const pcCamera = XRExtras.PlayCanvas.findOneCamera(this.entity)
// While XR is still loading, show some helpful things.
// Almost There: Detects whether the user's environment can support web ar, and if it doesn't,
// shows hints for how to view the experience.
// Loading: shows prompts for camera permission and hides the scene until it's ready for display.
// Runtime Error: If something unexpected goes wrong, display an error screen.
XRExtras.Loading.showLoading({onxrloaded: runOnLoad({pcCamera, pcApp: this.app}, [
// Optional modules that developers may wish to customize or theme.
XRExtras.AlmostThere.pipelineModule(), // Detects unsupported browsers and gives hints.
XRExtras.Loading.pipelineModule(), // Manages the loading screen on startup.
XRExtras.RuntimeError.pipelineModule(), // Shows an error image on runtime error.
])})
}
XR8.PlayCanvas.runFaceEffects( {pcCamera, pcApp}, [extraModules], config )
Description
Opens the camera and starts running Face Effects in a playcanvas scene.
Parameters
Parameter | Description |
---|---|
pcCamera | The playcanvas scene camera to drive with AR. |
pcApp | The playcanvas app, typically this.app . |
extraModules [Optional] | An optional array of extra pipeline modules to install. |
config [Optional] | Configuration parameters to pass to XR8.run() |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fall back to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
XR8.PlayCanvas.stopXr()
Description
Remove the modules added in runXr() and stop the camera.
Parameters
None.
XR8.PlayCanvas.stopFaceEffects()
Description
Remove the modules added in runFaceEffects() and stop the camera.
Parameters
None.
This section describes the events fired by 8th Wall in a PlayCanvas environment.
You can listen for these events in your web application.
Events Emitted
Event Emitted | Description |
---|---|
xr:camerastatuschange | This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status. |
xr:realityerror | This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed. |
xr:realityready | This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden. |
xr:screenshoterror | This event is emitted in response to the xr:screenshotrequest resulting in an error. |
XrController Events Emitted
Event Emitted | Description |
---|---|
xr:screenshotready | This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided. |
xr:imageloading | This event is emitted when detection image loading begins. |
xr:imagescanning | This event is emitted when all detection images have been loaded and scanning has begun. |
xr:imagefound | This event is emitted when an image target is first found. |
xr:imageupdated | This event is emitted when an image target changes position, rotation or scale. |
xr:imagelost | This event is emitted when an image target is no longer being tracked. |
FaceController Events Emitted
Event Emitted | Description |
---|---|
xr:faceloading | Fires when loading begins for additional face AR resources. |
xr:facescanning | Fires when all face AR resources have been loaded and scanning has begun. |
xr:facefound | Fires when a face is first found. |
xr:faceupdated | Fires when a face is subsequently found. |
xr:facelost | Fires when a face is no longer being tracked. |
Description
This event is fired when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
const handleCameraStatusChange = function handleCameraStatusChange(detail) {
console.log('status change', detail.status);
switch (detail.status) {
case 'requesting':
// Do something
break;
case 'hasStream':
// Do something
break;
case 'failed':
this.app.fire('xr:realityerror');
break;
}
}
this.app.on('xr:camerastatuschange', handleCameraStatusChange, this)
Description
This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
this.app.on('xr:realityerror', ({error, isDeviceBrowserSupported, compatibility}) => {
if (isDeviceBrowserSupported) {
// Browser is compatible. Print the exception for more information.
console.log(error)
return
}
// Browser is not compatible. Check the reasons why it may not be in `compatibility`
console.log(compatibility)
}, this)
Description
This event is fired when 8th Wall Web has initialized and at least one frame has been successfully processed.
this.app.on('xr:realityready', () => {
// Hide loading UI
}, this)
Description
This event is emitted in response to the xr:screenshotrequest resulting in an error.
this.app.on('xr:screenshoterror', (detail) => {
console.log(detail)
// Handle screenshot error.
}, this)
Description
This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided.
this.app.on('xr:screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
}, this)
Image target events can be listened to as this.app.on(event, handler, this).
xr:imageloading: Fires when detection image loading begins.
xr:imageloading : { imageTargets: {name, type, metadata} }
xr:imagescanning: Fires when all detection images have been loaded and scanning has begun.
xr:imagescanning : { imageTargets: {name, type, metadata, geometry} }
xr:imagefound: Fires when an image target is first found.
xr:imagefound : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
xr:imageupdated: Fires when an image target changes position, rotation or scale.
xr:imageupdated : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
xr:imagelost: Fires when an image target is no longer being tracked.
xr:imagelost : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
// 'name' (the image target's name) and 'entity' (the entity to show/hide) are assumed to be
// defined elsewhere in this script.
const showImage = (detail) => {
if (name != detail.name) { return }
const {rotation, position, scale} = detail
entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
entity.setPosition(position.x, position.y, position.z)
entity.setLocalScale(scale, scale, scale)
entity.enabled = true
}
const hideImage = (detail) => {
if (name != detail.name) { return }
entity.enabled = false
}
this.app.on('xr:imagefound', showImage, {})
this.app.on('xr:imageupdated', showImage, {})
this.app.on('xr:imagelost', hideImage, {})
Face Effects events can be listened to as this.app.on(event, handler, this).
xr:faceloading: Fires when loading begins for additional face AR resources.
xr:faceloading : {maxDetections, pointsPerDetection, indices, uvs}
xr:facescanning: Fires when all face AR resources have been loaded and scanning has begun.
xr:facescanning: {maxDetections, pointsPerDetection, indices, uvs}
xr:facefound: Fires when a face is first found.
xr:facefound : {id, transform, attachmentPoints, vertices, normals}
xr:faceupdated: Fires when a face is subsequently found.
xr:faceupdated : {id, transform, attachmentPoints, vertices, normals}
xr:facelost: Fires when a face is no longer being tracked.
xr:facelost : {id}
let mesh = null
// Fires when loading begins for additional face AR resources.
this.app.on('xr:faceloading', ({maxDetections, pointsPerDetection, indices, uvs}) => {
const node = new pc.GraphNode();
const material = this.material.resource;
mesh = pc.createMesh(
this.app.graphicsDevice,
new Array(pointsPerDetection * 3).fill(0.0), // setting filler vertex positions
{
uvs: uvs.map((uv) => [uv.u, uv.v]).flat(),
indices: indices.map((i) => [i.a, i.b, i.c]).flat()
}
);
const meshInstance = new pc.MeshInstance(node, mesh, material);
const model = new pc.Model();
model.graph = node;
model.meshInstances.push(meshInstance);
this.entity.model.model = model;
}, {})
// Fires when a face is subsequently found.
this.app.on('xr:faceupdated', ({id, transform, attachmentPoints, vertices, normals}) => {
const {position, rotation, scale, scaledDepth, scaledHeight, scaledWidth} = transform
this.entity.setPosition(position.x, position.y, position.z);
this.entity.setLocalScale(scale, scale, scale)
this.entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
// Set mesh vertices in local space
mesh.setPositions(vertices.map((vertexPos) => [vertexPos.x, vertexPos.y, vertexPos.z]).flat())
// Set vertex normals
mesh.setNormals(normals.map((normal) => [normal.x, normal.y, normal.z]).flat())
mesh.update()
}, {})
This section describes the events that are listened for by 8th Wall Web in a PlayCanvas environment.
You can fire these events in your web application to perform various actions:
Event Listener | Description |
---|---|
xr:hidecamerafeed | Hides the camera feed. Tracking does not stop. |
xr:recenter | Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter. |
xr:screenshotrequest | Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit an xr:screenshotready event with the JPEG-compressed image, or xr:screenshoterror if an error has occurred. |
xr:showcamerafeed | Shows the camera feed. |
xr:stopxr | Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked. |
this.app.fire('xr:hidecamerafeed')
Parameters
None
Description
Hides the camera feed. Tracking does not stop.
this.app.fire('xr:hidecamerafeed')
this.app.fire('xr:recenter')
Description
Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
Parameters
Parameter | Description |
---|---|
origin: {x, y, z} [Optional] | The location of the new origin. |
facing: {w, x, y, z} [Optional] | A quaternion representing the direction the camera should face at the origin. |
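A minimal sketch of recentering to a new origin. This assumes the event accepts an {origin, facing} payload like its Sumerian counterpart below; the values shown are illustrative:
this.app.fire('xr:recenter')
// OR, assuming an {origin, facing} payload is accepted:
this.app.fire('xr:recenter', {
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})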
/*jshint esversion: 6, asi: true, laxbreak: true*/
// taprecenter.js: Defines a playcanvas script that re-centers the AR scene when the screen is
// tapped.
var taprecenter = pc.createScript('taprecenter')
// Fire a 'recenter' event to move the camera back to its starting location in the scene.
taprecenter.prototype.initialize = function() {
  this.app.touch.on(pc.EVENT_TOUCHSTART, (event) => {
    if (event.touches.length !== 1) { return }
    this.app.fire('xr:recenter')
  })
}
this.app.fire('xr:screenshotrequest')
Parameters
None
Description
Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit an xr:screenshotready event with the JPEG-compressed image, or xr:screenshoterror if an error has occurred.
this.app.on('xr:screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
}, this)
this.app.on('xr:screenshoterror', (detail) => {
console.log(detail)
// Handle screenshot error.
}, this)
this.app.fire('xr:screenshotrequest')
this.app.fire('xr:showcamerafeed')
Parameters
None
Description
Shows the camera feed.
this.app.fire('xr:showcamerafeed')
this.app.fire('xr:stopxr')
Parameters
None
Description
Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.
this.app.fire('xr:stopxr')
Amazon Sumerian lets you create VR, AR, and 3D applications quickly and easily. For more information on Sumerian, please see https://aws.amazon.com/sumerian/
Adding 8th Wall Web to Sumerian
Please refer to the following URL for a getting started guide on using 8th Wall Web with Amazon Sumerian:
https://github.com/8thwall/web/tree/master/gettingstarted/xrsumerian
Functions
Function | Description |
---|---|
addXRWebSystem | Adds a custom Sumerian System using XrController to the provided Sumerian world. |
addFaceEffectsWebSystem | Adds a custom Sumerian System using FaceController to the provided Sumerian world. |
XR8.Sumerian.addXRWebSystem()
Description
Adds a custom Sumerian System to the provided Sumerian world. If the given world is already running (i.e. in a {World#STATE_RUNNING} state), this system will start itself. Otherwise, it will wait for the world to start before running. When starting, this system will attach to the camera in the scene, modify its position, and render the camera feed to the background. The given Sumerian world must only contain one camera.
Parameters
Parameter | Description |
---|---|
world | The Sumerian world that corresponds to the loaded scene. |
config [Optional] | Configuration parameters to pass to XR8.run() |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available; otherwise fall back to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
window.XR8.Sumerian.addXRWebSystem(world)
XR8.Sumerian.addFaceEffectsWebSystem()
Description
Adds a custom Sumerian System to the provided Sumerian world. If the given world is already running (i.e. in a {World#STATE_RUNNING} state), this system will start itself. Otherwise, it will wait for the world to start before running. When starting, this system will attach to the camera in the scene, modify its position, and render the camera feed to the background. The given Sumerian world must only contain one camera.
Parameters
Parameter | Description |
---|---|
world | The Sumerian world that corresponds to the loaded scene. |
config [Optional] | Configuration parameters to pass to XR8.run() |
config [Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available; otherwise fall back to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced Users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
window.XR8.Sumerian.addFaceEffectsWebSystem(world)
This section describes the events emitted when using 8th Wall Web with Amazon Sumerian
You can listen for these events in your web application and call a function to handle them.
Events Emitted
Event Emitted | Description |
---|---|
camerastatuschange | This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible statuses. |
screenshoterror | This event is emitted in response to a screenshotrequest event resulting in an error. |
screenshotready | This event is emitted when a screenshotrequest event completes successfully. The JPEG-compressed image will be provided. |
xrerror | This event is emitted when an error has occurred while initializing 8th Wall Web. |
xrready | This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. |
XrController Events Emitted
Event Emitted | Description |
---|---|
xrimageloading | This event is emitted when detection image loading begins. |
xrimagescanning | This event is emitted when all detection images have been loaded and scanning has begun. |
xrimagefound | This event is emitted when an image target is first found. |
xrimageupdated | This event is emitted when an image target changes position, rotation or scale. |
xrimagelost | This event is emitted when an image target is no longer being tracked. |
FaceController Events Emitted
Event Emitted | Description |
---|---|
xrfaceloading | Fires when loading begins for additional face AR resources. |
xrfacescanning | Fires when all face AR resources have been loaded and scanning has begun. |
xrfacefound | Fires when a face is first found. |
xrfaceupdated | Fires when a face is subsequently found. |
xrfacelost | Fires when a face is no longer being tracked. |
Description
This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible statuses.
var handleCameraStatusChange = function handleCameraStatusChange(data) {
console.log('status change', data.status);
switch (data.status) {
case 'requesting':
// Do something
break;
case 'hasStream':
// Do something
break;
case 'failed':
// Do something
break;
}
};
window.sumerian.SystemBus.addListener('camerastatuschange', handleCameraStatusChange)
Description
This event is emitted in response to a screenshotrequest event resulting in an error.
window.sumerian.SystemBus.addListener('screenshoterror', (data) => {
  console.log(data)
  // Handle screenshot error.
})
Description
This event is emitted when a screenshotrequest event completes successfully. The JPEG-compressed image of the Sumerian canvas will be provided.
window.sumerian.SystemBus.addListener('screenshotready', (data) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + data
})
Description
This event is emitted when an error has occurred while initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
window.sumerian.SystemBus.addListener('xrerror', (data) => {
if (XR8.XrDevice.isDeviceBrowserCompatible()) {
// Browser is compatible. Print the exception for more information.
console.log(data.error)
return
}
// Browser is not compatible. Check the reasons why it may not be.
for (let reason of XR8.XrDevice.incompatibleReasons()) {
// Handle each XR8.XrDevice.IncompatibleReason
}
})
Description
This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden.
window.sumerian.SystemBus.addListener('xrready', () => {
// Hide loading UI
})
Description
This event is emitted when detection image loading begins.
xrimageloading : { imageTargets: {name, type, metadata} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
Description
This event is emitted when all detection images have been loaded and scanning has begun.
xrimagescanning : { imageTargets: {name, type, metadata, geometry} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
geometry | Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight} , else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians} |
If type = FLAT, geometry:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL, geometry:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
Description
This event is emitted when an image target is first found.
xrimagefound : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
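A minimal listener sketch for this event, following the destructuring pattern of the face-event listeners below; the logging is illustrative:
window.sumerian.SystemBus.addListener(
  'xrimagefound',
  ({name, type, position, rotation, scale}) => {
    // Position your scene entity on the located image target here.
    console.log(`Found image target ${name} at`, position)
  })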
Description
This event is emitted when an image target changes position, rotation or scale.
xrimageupdated : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
Description
This event is emitted when an image target is no longer being tracked.
xrimagelost : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
Description
Fires when loading begins for additional face AR resources.
xrfaceloading : {maxDetections, pointsPerDetection, indices, uvs}
window.sumerian.SystemBus.addListener(
'xrfaceloading',
({maxDetections, pointsPerDetection, indices, uvs}) => {
})
Description
Fires when all face AR resources have been loaded and scanning has begun.
xrfacescanning : {maxDetections, pointsPerDetection, indices, uvs}
window.sumerian.SystemBus.addListener(
'xrfacescanning',
({maxDetections, pointsPerDetection, indices, uvs}) => {
})
Description
Fires when a face is first found.
xrfacefound : {id, transform, attachmentPoints, vertices, normals}
window.sumerian.SystemBus.addListener(
'xrfacefound',
({id, transform, attachmentPoints, vertices, normals}) => {
})
Description
Fires when a face is subsequently found.
xrfaceupdated : {id, transform, attachmentPoints, vertices, normals}
window.sumerian.SystemBus.addListener(
'xrfaceupdated',
({id, transform, attachmentPoints, vertices, normals}) => {
})
Description
Fires when a face is no longer being tracked.
xrfacelost : {id}
window.sumerian.SystemBus.addListener(
'xrfacelost',
({id}) => {
})
This section describes the events that are listened for by the Sumerian module in 8th Wall Web.
You can emit these events in your web application to perform various actions:
Event Listener | Description |
---|---|
recenter | Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter. |
screenshotrequest | Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred. |
window.sumerian.SystemBus.emit('recenter', {origin, facing})
Description
Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
Parameters
Parameter | Description |
---|---|
origin: {x, y, z} [Optional] | The location of the new origin. |
facing: {w, x, y, z} [Optional] | A quaternion representing direction the camera should face at the origin. |
window.sumerian.SystemBus.emit('recenter')
// OR
window.sumerian.SystemBus.emit('recenter', {
origin: { x: 1, y: 4, z: 0 },
facing: { w: 0.9856, x: 0, y: 0.169, z: 0 }
})
window.sumerian.SystemBus.emit('screenshotrequest')
Parameters
None
Description
Emits a request to the engine to capture a screenshot of the Sumerian canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred.
const photoButton = document.getElementById('photoButton')
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
  image.src = ""
  window.sumerian.SystemBus.emit('screenshotrequest')
})
window.sumerian.SystemBus.addListener('screenshotready', (data) => {
  image.src = 'data:image/jpeg;base64,' + data
})
window.sumerian.SystemBus.addListener('screenshoterror', (data) => {
  console.log(data)
  // Handle screenshot error.
})
Description
Provides a camera pipeline module that drives the three.js camera to do virtual overlays.
Functions
Function | Description |
---|---|
pipelineModule | A pipeline module that interfaces with the threejs environment and lifecycle. |
xrScene | Get a handle to the xr scene, camera and renderer. |
XR8.Threejs.pipelineModule()
Description
A pipeline module that interfaces with the threejs environment and lifecycle. The threejs scene can be queried using Threejs.xrScene() after Threejs.pipelineModule()'s onStart method is called. Setup can be done in another pipeline module's onStart method by referring to Threejs.xrScene(), as long as XR8.addCameraPipelineModule is called on the second module after calling XR8.addCameraPipelineModule(Threejs.pipelineModule()).
Note that this module does not actually draw the camera feed to the canvas, GlTextureRenderer does that. To add a camera feed in the background, install the GlTextureRenderer.pipelineModule() before installing this module (so that it is rendered before the scene is drawn).
Parameters
None
Returns
A Threejs pipeline module that can be added via XR8.addCameraPipelineModule().
// Add XrController.pipelineModule(), which enables 6DoF camera motion estimation.
XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())
// Add a GlTextureRenderer which draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
// Add Threejs.pipelineModule() which creates a threejs scene, camera, and renderer, and
// drives the scene camera based on 6DoF camera motion.
XR8.addCameraPipelineModule(XR8.Threejs.pipelineModule())
// Add custom logic to the camera loop. This is done with camera pipeline modules that provide
// logic for key lifecycle moments for processing each camera frame. In this case, we'll be
// adding onStart logic for scene initialization, and onUpdate logic for scene updates.
XR8.addCameraPipelineModule({
// Camera pipeline modules need a name. It can be whatever you want but must be unique
// within your app.
name: 'myawesomeapp',
// onStart is called once when the camera feed begins. In this case, we need to wait for the
// XR8.Threejs scene to be ready before we can access it to add content.
onStart: ({canvasWidth, canvasHeight}) => {
// Get the 3js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
// reason we can access it here now is because 'myawesomeapp' was installed after
// XR8.Threejs.pipelineModule().
const {scene, camera} = XR8.Threejs.xrScene()
// Add some objects to the scene and set the starting camera position.
myInitXrScene({scene, camera})
// Sync the xr controller's 6DoF position and camera parameters with our scene.
XR8.XrController.updateCameraProjectionMatrix({
origin: camera.position,
facing: camera.quaternion,
})
},
// onUpdate is called once per camera loop prior to render. Any 3js scene manipulation
// would typically happen here.
onUpdate: () => {
// Update the position of objects in the scene, etc.
updateScene(XR8.Threejs.xrScene())
},
})
XR8.Threejs.xrScene()
Description
Get a handle to the xr scene, camera and renderer.
Parameters
None
Returns
An object: { scene, camera, renderer }
Property | Description |
---|---|
scene | The Threejs scene. |
camera | The Threejs main camera. |
renderer | The Threejs renderer. |
const {scene, camera, renderer} = XR8.Threejs.xrScene()
Enumeration
Description
Desired camera to use.
Properties
Property | Value | Description |
---|---|---|
FRONT | front | Use the front facing / selfie camera. |
BACK | back | Use the rear facing / back camera. |
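For example, to open the front-facing camera, this value can be passed to XR8.run() via cameraConfig (see the config tables above); a minimal sketch:
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({
  canvas: document.getElementById('camerafeed'),
  cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
})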
Enumeration
Description
Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera.
Note: World tracking can only be used with XR8.XrConfig.device().MOBILE.
Properties
Property | Value | Description |
---|---|---|
MOBILE | mobile | Restrict the camera pipeline to mobile-class devices, for example phones and tablets. |
ANY | any | Start running camera pipeline without checking device capabilities. This may fail at some point in the pipeline startup if a required sensor is not available at run time (for example, a laptop has no camera). |
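A minimal sketch of allowing the camera pipeline to start on any device class, e.g. for a non-world-tracking experience on a laptop:
XR8.run({
  canvas: document.getElementById('camerafeed'),
  allowedDevices: XR8.XrConfig.device().ANY,
})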
Description
XrController provides 6DoF camera tracking and interfaces for configuring tracking.
Functions
Function | Description |
---|---|
configure | Configures what processing is performed by XrController (may have performance implications). |
hitTest | Estimate the 3D position of a point on the camera feed. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position. |
recenter | Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking. |
updateCameraProjectionMatrix | Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session. |
XrController.configure({ enableWorldPoints, enableLighting, disableWorldTracking, imageTargets: [] })
Description
Configures the processing performed by XrController (may have performance implications).
Parameters
Parameter | Description |
---|---|
enableLighting [Optional] | If true, lighting will be provided by XrController.pipelineModule() as processCpuResult.reality.lighting |
enableWorldPoints [Optional] | If true, worldPoints will be provided by XrController.pipelineModule() as processCpuResult.reality.worldPoints . |
disableWorldTracking [Optional] | If true, turn off SLAM tracking for efficiency. This needs to be done BEFORE XR8.run() is called. |
imageTargets [Optional] | List of names of the image target to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | If true, use left-handed coordinates. Default is false |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
IMPORTANT: disableWorldTracking: true needs to be set BEFORE both XR8.XrController.pipelineModule() and XR8.run() are called.
XR8.XrController.configure({ enableLighting: true, enableWorldPoints: true, disableWorldTracking: false })
// Disable world tracking (SLAM)
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed')})
XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})
XrController.hitTest(X, Y, includedTypes = [])
Description
Estimate the 3D position of a point on the camera feed. X and Y are specified as numbers between 0 and 1, where (0, 0) is the upper left corner and (1, 1) is the lower right corner of the camera feed as rendered in the camera that was specified by updateCameraProjectionMatrix. Multiple 3D position estimates may be returned for a single hit test, based on the source of data being used to estimate the position. The data source that was used to estimate the position is indicated by hitTest.type.
Parameters
Parameter | Description |
---|---|
X | Value between 0 and 1 that represents the horizontal position on camera feed from left to right. |
Y | Value between 0 and 1 that represents the vertical position on camera feed from top to bottom. |
includedTypes | List of one or more of: 'FEATURE_POINT' , 'ESTIMATED_SURFACE' or 'DETECTED_SURFACE' . Note: Currently only 'FEATURE_POINT' is supported. |
Returns
An array of estimated 3D positions from the hit test:
[{ type, position, rotation, distance }]
Property | Description |
---|---|
type | One of 'FEATURE_POINT' , 'ESTIMATED_SURFACE' , 'DETECTED_SURFACE' , or 'UNSPECIFIED' |
position: {x, y, z} | The estimated 3D position of the queried point on the camera feed. |
rotation: {x, y, z, w} | The estimated 3D rotation of the queried point on the camera feed. |
distance | The estimated distance from the device of the queried point on the camera feed. |
const hitTestHandler = (e) => {
const x = e.touches[0].clientX / window.innerWidth
const y = e.touches[0].clientY / window.innerHeight
const hitTestResults = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
}
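To act on the results, here is a sketch that places an object at the first estimated position; it assumes a three.js scene obtained from XR8.Threejs.xrScene() and a 'model' object already added to that scene:
const placeOnTap = (e) => {
  const x = e.touches[0].clientX / window.innerWidth
  const y = e.touches[0].clientY / window.innerHeight
  const hitTestResults = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
  if (hitTestResults.length === 0) { return }  // No estimate for this point.
  const {position} = hitTestResults[0]
  model.position.set(position.x, position.y, position.z)  // 'model' is assumed to be in your scene.
}
document.getElementById('camerafeed').addEventListener('touchstart', placeOnTap)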
XR8.XrController.pipelineModule()
Parameters
None
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
Returns
Return value is an object made available to onUpdate as:
processCpuResult.reality: { rotation, position, intrinsics, trackingStatus, trackingReason, worldPoints, realityTexture, lighting }
Property | Description |
---|---|
rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
position: {x, y, z} | The position of the camera in the scene. |
intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
trackingStatus | One of 'UNSPECIFIED' , 'NOT_AVAILABLE' , 'LIMITED' or 'NORMAL' . |
trackingReason | One of 'UNSPECIFIED' , 'INITIALIZING' , 'RELOCALIZING' , 'TOO_MUCH_MOTION' or 'NOT_ENOUGH_TEXTURE' . |
worldPoints: [{id, confidence, position: {x, y, z}}] | An array of detected points in the world at their location in the scene. Only filled if XrController is configured to return world points and trackingReason != INITIALIZING. |
realityTexture | The WebGLTexture containing camera feed data. |
lighting: {exposure, temperature} | Exposure of the lighting in your environment. Note: temperature has not yet been implemented. |
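A sketch of reading these values from a custom pipeline module's onUpdate callback; the module name here is illustrative:
XR8.addCameraPipelineModule({
  name: 'realitylogger',  // Any unique name. Add after XR8.XrController.pipelineModule().
  onUpdate: ({processCpuResult}) => {
    const {reality} = processCpuResult
    if (!reality) { return }  // XrController hasn't produced a result yet.
    const {position, rotation, trackingStatus, trackingReason} = reality
    if (trackingStatus === 'LIMITED') {
      console.log(`Tracking is limited: ${trackingReason}`, position, rotation)
    }
  },
})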
Dispatched Events
imageloading: Fires when detection image loading begins.
imageloading.detail : { imageTargets: {name, type, metadata} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
imagescanning: Fires when all detection images have been loaded and scanning has begun.
imagescanning.detail : { imageTargets: {name, type, metadata, geometry} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
geometry | Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight} , else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians} |
If type = FLAT, geometry:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL, geometry:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imagefound: Fires when an image target is first found.
imagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imageupdated: Fires when an image target changes position, rotation or scale.
imageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imagelost: Fires when an image target is no longer being tracked.
imagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())
const logEvent = ({name, detail}) => {
console.log(`Handling event ${name}, got detail, ${JSON.stringify(detail)}`)
}
XR8.addCameraPipelineModule({
name: 'eventlogger',
listeners: [
{event: 'reality.imageloading', process: logEvent },
{event: 'reality.imagescanning', process: logEvent },
{event: 'reality.imagefound', process: logEvent},
{event: 'reality.imageupdated', process: logEvent},
{event: 'reality.imagelost', process: logEvent},
],
})
XR8.XrController.recenter()
Parameters
None
Description
Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking.
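For example, wired to a button tap (the 'recenterButton' element is illustrative):
document.getElementById('recenterButton').addEventListener('click', () => {
  XR8.XrController.recenter()
})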
XR8.XrController.updateCameraProjectionMatrix({ cam, origin, facing })
Description
Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session.
Parameters
Parameter | Description |
---|---|
cam [Optional] | { pixelRectWidth, pixelRectHeight, nearClipPlane, farClipPlane } |
origin: { x, y, z } [Optional] | The starting position of the camera in the scene. |
facing: { w, x, y, z } [Optional] | The starting direction (quaternion) of the camera in the scene. |
cam has the following parameters:
Parameter | Description |
---|---|
pixelRectWidth | The width of the canvas that displays the camera feed. |
pixelRectHeight | The height of the canvas that displays the camera feed. |
nearClipPlane | The closest distance to the camera at which scene objects are visible. |
farClipPlane | The farthest distance to the camera at which scene objects are visible. |
XR8.XrController.updateCameraProjectionMatrix({ origin: { x: 1, y: 4, z: 0 }, facing: { w: 0.9856, x: 0, y: 0.169, z: 0 } })
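A sketch that also resets the display geometry via cam; the clip-plane values here are illustrative, not prescribed defaults:
const canvas = document.getElementById('camerafeed')
XR8.XrController.updateCameraProjectionMatrix({
  cam: {
    pixelRectWidth: canvas.width,
    pixelRectHeight: canvas.height,
    nearClipPlane: 0.01,  // Illustrative values.
    farClipPlane: 1000,
  },
  origin: { x: 1, y: 4, z: 0 },
  facing: { w: 0.9856, x: 0, y: 0.169, z: 0 },
})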
Description
Provides information about device compatibility and characteristics.
Properties
Property | Type | Description |
---|---|---|
IncompatibilityReasons | Enum | The possible reasons for why a device and browser may not be compatible with 8th Wall Web. |
Functions
Function | Description |
---|---|
deviceEstimate | Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable. |
incompatibleReasons | Returns an array of XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false. |
incompatibleReasonDetails | Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false. |
isDeviceBrowserCompatible | Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported. |
Enumeration
Description
The possible reasons for why a device and browser may not be compatible with 8th Wall Web.
Properties
Property | Value | Description |
---|---|---|
UNSPECIFIED | 0 | The incompatible reason is not specified. |
UNSUPPORTED_OS | 1 | The estimated operating system is not supported. |
UNSUPPORTED_BROWSER | 2 | The estimated browser is not supported. |
MISSING_DEVICE_ORIENTATION | 3 | The browser does not support device orientation events. |
MISSING_USER_MEDIA | 4 | The browser does not support user media access. |
MISSING_WEB_ASSEMBLY | 5 | The browser does not support web assembly. |
XR8.XrDevice.deviceEstimate()
Description
Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable.
Parameters
None
Returns
An object: { locale, os, osVersion, manufacturer, model }
Property | Description |
---|---|
locale | The user's locale. |
os | The device's operating system. |
osVersion | The device's operating system version. |
manufacturer | The device's manufacturer. |
model | The device's model. |
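For example:
const {os, osVersion, manufacturer, model} = XR8.XrDevice.deviceEstimate()
console.log(`Estimated device: ${manufacturer} ${model}, ${os} ${osVersion}`)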
XR8.XrDevice.incompatibleReasons({ allowedDevices })
Description
Returns an array of XR8.XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
Returns an array of XrDevice.IncompatibilityReasons
const reasons = XR8.XrDevice.incompatibleReasons()
for (let reason of reasons) {
  switch (reason) {
    case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_OS:
      // Handle unsupported os error messaging.
      break;
    case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_BROWSER:
      // Handle unsupported browser
      break;
    ...
  }
}
XR8.XrDevice.incompatibleReasonDetails({ allowedDevices })
Description
Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XrDevice.isDeviceBrowserCompatible() returns false.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
An object: { inAppBrowser, inAppBrowserType }
Property | Description |
---|---|
inAppBrowser | The name of the in-app browser detected (e.g. 'Twitter' ) |
inAppBrowserType | A string that helps describe how to handle the in-app browser. |
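A sketch that uses these hints after a failed compatibility check (the messaging is up to your app):
if (!XR8.XrDevice.isDeviceBrowserCompatible()) {
  const {inAppBrowser, inAppBrowserType} = XR8.XrDevice.incompatibleReasonDetails()
  if (inAppBrowser) {
    // e.g. direct the user out of the in-app browser, as the XRExtras link-out flows do.
    console.log(`In-app browser detected: ${inAppBrowser} (${inAppBrowserType})`)
  }
}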
XR8.XrDevice.isDeviceBrowserCompatible({ allowedDevices })
Description
Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
True or false.
XR8.XrDevice.isDeviceBrowserCompatible({allowedDevices: XR8.XrConfig.device().MOBILE})
Description
Utilities for specifying permissions required by a pipeline module.
Modules can indicate which browser capabilities they require that may need permission requests. The framework can use these to request the appropriate permissions if they are absent, or to create components that request the appropriate permissions before running XR.
Properties
Property | Type | Description |
---|---|---|
permissions() | Enum | List of permissions that can be specified as required by a pipeline module. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})
Enumeration
Description
Permissions that can be required by a pipeline module.
Properties
Property | Value | Description |
---|---|---|
CAMERA | camera | Require camera. |
DEVICE_MOTION | devicemotion | Require accelerometer. |
DEVICE_ORIENTATION | deviceorientation | Require gyro. |
MICROPHONE | microphone | Require microphone. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})