8th Wall enables developers to create, collaborate and publish WebAR experiences that run directly in a web browser.
Built entirely using standards-compliant JavaScript and WebGL, 8th Wall Web is a complete implementation of 8th Wall's Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time Web AR on browsers. Features include World Tracking, Image Targets, and Face Effects.
The 8th Wall Cloud Editor allows you to develop fully featured WebAR projects and collaborate with team members in real time. Built-In Hosting allows you to publish projects to multiple deployment states hosted on 8th Wall's reliable and secure global network, including a password-protected staging environment. Self-Hosting is also available for certain plans.
8th Wall is easily integrated into 3D JavaScript frameworks such as:
To develop app-based AR with Unity, use Niantic Lightship ARDK.
8th Wall Release 21.2 is now available! This release provides a number of updates and enhancements:
Release 21.2: (2022-December-16, v21.2.2.997 / 2022-December-13, v21.2.1.997)
New Features:
Introducing Sky Effects - a major update to the 8th Wall Engine enabling sky segmentation:
Fixes and Enhancements:
XRExtras Enhancements:
Click Here to see a full list of changes.
Mobile browsers require the following functionality to support 8th Wall Web experiences:
NOTE: 8th Wall Web experiences must be viewed over HTTPS. This is required by browsers for camera access.
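The HTTPS requirement comes from the browser's secure-context rule for camera access. As a rough illustration, a hypothetical helper (not part of 8th Wall Web; note that browsers also treat localhost as secure during development):

```javascript
// Illustrative only: mirrors the browser rule that camera access
// (and therefore WebAR) requires a secure context. Browsers treat
// https: pages and localhost as secure.
function canAccessCamera(urlString) {
  const url = new URL(urlString)
  const isLocalhost = url.hostname === 'localhost' || url.hostname === '127.0.0.1'
  return url.protocol === 'https:' || isLocalhost
}

console.log(canAccessCamera('https://mycompany.8thwall.app/project-name')) // true
console.log(canAccessCamera('http://example.com/ar')) // false
console.log(canAccessCamera('http://localhost:8080/')) // true (dev only)
```

In practice this means a plain `http://` link to your experience will fail at the camera-permission step, even though the page itself may load.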
This translates to the following compatibility for iOS and Android devices:
iOS:
Apps that use SFSafariViewController web views (iOS 13+)
Apps/Browsers that use WKWebView web views (iOS 14.3+)
Examples:
Android:
Browsers known to natively support the features required for WebAR:
Apps using Web Views known to support the features required for WebAR:
Link-out support
For apps that don’t natively support the features required for WebAR, our XRExtras library provides flows to direct users to the right place, maximizing accessibility of your WebAR projects from these apps.
Examples: TikTok, Facebook (Android), Facebook Messenger (Android), Instagram (Android)
Screenshots: Launch Browser from Menu (iOS), Launch Browser from Button (Android), and Copy Link to Clipboard.
8th Wall Web is easily integrated into 3D JavaScript frameworks such as:
Platform | Lighting Estimation | AR Background | Camera Motion | Horizontal Surfaces | Vertical Surfaces | Image Detection & Tracking | World Points | Hit Tests | Face Effects | Sky Effects |
---|---|---|---|---|---|---|---|---|---|---|
8th Wall Web | Yes | Yes | 6 DoF | Yes, single surface | No | Yes | Yes | Yes | Yes | Yes |
This guide provides all of the steps required to get you up and running with the 8th Wall Cloud Editor and Built-in Hosting platform.
Creating an 8th Wall Account gives you the ability to:
New Users: Sign up for a 14-day free trial at https://www.8thwall.com/try-free-trial
Existing Users: Login at https://www.8thwall.com/login using your email address and password.
The 8th Wall Cloud Editor and Built-in Hosting platform are available to workspaces with a paid subscription. 8th Wall offers a 14-day free trial so you can get access to the full power of 8th Wall and begin building WebAR experiences.
At the end of your 14-day free trial, your account will automatically upgrade to a paid plan. You must cancel your free trial before the end of the trial period to avoid charges. 8th Wall subscriptions automatically renew until you cancel. There are no refunds or credits for partial or unused months. To manage your subscription settings, please see https://www.8thwall.com/docs/web/#account-settings
From the 8th Wall Homepage or Pricing page, click Start Free Trial
Create your account by entering your Name, Email and Password. Review and accept the 8th Wall Terms and Conditions, then click Next.
IMPORTANT! At the end of the 14-day free trial period, your account will automatically upgrade to the selected paid plan. You must cancel the free trial before the end of the trial period to avoid charges. There are no refunds or credits for partial or unused months. To manage your subscription settings, please see https://www.8thwall.com/docs/web/#account-settings
The free trial screen will display the date your trial ends, which is when you will be automatically charged if you don't cancel:
Enter a Workspace Name. This value is for display purposes only and doesn't impact any URLs associated with your workspace.
Enter a Workspace URL. Pick something relevant for your workspace name, such as the name of your company.
IMPORTANT: This value will be used as the default sub-domain for ALL 8th Wall hosted projects in your account (e.g. mycompany.8thwall.app/project-name). This value will also be used in your Public Profile page URL (e.g. www.8thwall.com/mycompany).
You cannot change this value later, so choose wisely!
Note: if you want to connect custom domains to your 8th Wall hosted projects to override the default URL, please see here.
Select Hosting Type (Pro/Enterprise plans only): Decide up front if the project will be hosted by 8th Wall and developed using the 8th Wall Cloud Editor, or if you'll be self-hosting. This setting cannot be changed later. Self-hosting is only available to paid Pro/Enterprise workspaces. Self-hosting is not available to workspaces on Starter or Plus plans, or workspaces on the Pro plan during the free trial period.
Select a Project Name: The project name is used both in the default project URL (e.g. mycompany.8thwall.app/project-name) as well as the Featured Project page URL (e.g. www.8thwall.com/mycompany/project-name). It cannot be changed later.
Select a License Type (Pro/Enterprise only)
License Types:
Commercial: Commercial projects are intended for commercial use. When you’re ready to launch a commercial project publicly, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Web App: You may create unlimited first-party projects under this license. Web app projects require the splash screen to remain on and will be publicly viewable on 8thwall.com as soon as you publish. This license does not permit projects created for work-for-hire as they require a Commercial license.
On the following screen, a project README will be displayed. [Optional] To test out the template before cloning, click the Launch button and scan the QR code with your phone.
Click the Clone Project button to proceed. The sample project will be cloned into your workspace, and the Cloud Editor will be opened.
At the top of the Cloud Editor window, click the Preview button.
Scan the QR code with your mobile device to open a web browser and look at a live preview of the WebAR project.
Note: The "Preview" QR code displayed within the Cloud Editor is a temporary, one-time use QR code only meant for use by the developer while actively working in the Cloud Editor. This QR code takes you to a private, development URL, and isn't accessible by others. To share your work with others, please see the section below on Publishing your project.
When the WebAR preview loads, tap on the surface in front of you to spawn 3D models.
Result:
At this point, you have a fully operational Web AR project and have previewed it on a mobile device. Next, make a very small code change to illustrate how to update the project, preview the new code, and land the changes into source control.
Within body.html of the Cloud Editor project, make a small text change to the promptText. For example, simply change the text from Tap To Place Model to Tap To Begin.
Click Save + Build to save your work and initiate a new cloud build of your project.
If your mobile browser is still open from scanning the Preview QR code in Step 2, your phone will automatically reload once the build completes. If the mobile browser page is no longer open, scan the Preview QR code again to preview your updates to the project.
Once satisfied with your changes, land the updated code into the Cloud Editor's integrated source control. At the top-right of the Cloud Editor window, click Land. The button will be green, indicating that there are changes in the project that have not yet been landed into source control:
The final step is to publish your updated and landed project code using 8th Wall's Built-in Hosting. This allows the project to be viewed publicly by anyone on the internet.
Note: Commercial projects require additional commercial licenses when launched. See https://www.8thwall.com/pricing for more info.
Go back to the Project Dashboard in the left navigation. In the QR 8 Code section, the Public project URL will be displayed along with both an 8th.io shortlink and associated QR code.
Scan the QR code with your mobile device to view the Public Web AR experience.
8th Wall has created a number of sample projects that you can clone and use as starting points to help you get started. Please check out:
Cloud Editor & 8th Wall Hosted examples:
Self-Hosted examples:
Creating an 8th Wall Account gives you the ability to:
New Users: Sign up for a 14-day free trial at https://www.8thwall.com/try-free-trial and please follow the Quick Start Guide to get started!
Existing Users: Login at https://www.8thwall.com/login using your email address and password.
The 8th Wall homepage, when logged in, provides access to all of your workspaces and recent projects. Select a Workspace or Project to access its dashboard.
Homepage guide:
A Workspace is a logical grouping of Projects, Users, and Billing. Workspaces can contain one or more Users, each with different permissions. Users can belong to multiple Workspaces.
The Workspace dashboard allows you to:
When creating a new 8th Wall account directly from 8thwall.com, you will start with a workspace with a 14-day free trial.
If signing up via an invitation from another 8th Wall user, you will be added as a team member of their existing workspace.
To select a workspace, perform one of the following:
Each Workspace has a team containing one or more Users, each with different permissions. Users can belong to multiple Workspace teams.
Add others to your team to allow them to access the Projects in your workspace. This allows you to collaboratively create, manage, test and publish Web AR projects as a team.
Team members can have one of three roles:
Capabilities for each role:
Capability | OWNER | ADMIN | DEV |
---|---|---|---|
Projects - View | X | X | X |
Projects - Create | X | X | X |
Projects - Edit | X | X | X |
Projects - Delete | X | X | X |
Authorize Devices | X | X | X |
Teams - View Users | X | X | X |
Teams - Invite Users | X | X | |
Teams - Remove Users | X | X | |
Teams - Manage User Roles | X | X | |
Workspaces - Create | X | X | X |
Workspaces - Edit | X | | |
Workspaces - Manage Plans | X | | |
Edit Profile | X | X | X |
Each user in your workspace has a handle. Workspace handles will be the same as the User Handle defined in a user's profile unless already taken or customized by a user.
Handles are used as part of the URL (in the format "handle-client-workspace.dev.8thwall.app") to preview new changes when developing with the 8th Wall Cloud Editor.
Example: tony-default-mycompany.dev.8thwall.app
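The preview hostname above is assembled mechanically from the three pieces; a small sketch (the hostname format is taken from this document, but the helper itself is hypothetical and not something 8th Wall ships):

```javascript
// Illustrative helper only - builds the dev-preview hostname described
// in the docs from a user handle, client name, and workspace URL value.
function previewHostname(userHandle, clientName, workspaceUrl) {
  return `${userHandle}-${clientName}-${workspaceUrl}.dev.8thwall.app`
}

console.log(previewHostname('tony', 'default', 'mycompany'))
// -> 'tony-default-mycompany.dev.8thwall.app'
```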
Important
Modify User Handle
This section describes how to activate your Public Profile and feature projects on your page.
Your Public Profile is your own page on 8thwall.com where you can showcase your work, demo your experience and even share your code if you so choose.
All Pro and Enterprise workspaces are provided with a Public Profile on 8thwall.com but these pages need to be activated in order to be accessible. Only Owner and Admin user roles have permission to make changes and activate your Public Profile. You can deactivate your Public Profile at any time. Your Public Profile must be activated first in order to publish Featured Projects to your page.
To activate your Public Profile:
Complete all mandatory fields on the Page Info form including:
Click Save to save the information you have entered.
Click Activate Public Profile when you are ready for your Public Profile page to be active.
You can deactivate your Public Profile at any time. Your Public Profile and any Featured Project pages will no longer be visible once this page is deactivated. This will not impact any WebAR experiences, only the public profile pages. Only Owner and Admin users have permission to deactivate your Public Profile.
To deactivate your Public Profile:
Click on the Deactivate Public Profile link at the bottom of the page.
Confirm you wish to deactivate the page by typing in the word "DEACTIVATE" in all caps and click Confirm.
You will receive a message confirming your Public Profile has been deactivated.
Once your Public Profile is activated, you can publish Featured Project pages to your Public Profile page. All 8th Wall projects can be featured on a public profile, including non-commercial, demo, education and commercial projects (with both active and completed licenses). All workspace users have permission to create, save and publish featured projects. Projects can be added to or removed from a Public Profile at any time. Publishing a Featured Project does not impact the live published WebAR experience.
To publish a Featured Project on your Public Profile:
Basic Information
If Basic Information of your project is missing (Project Title, Project Description or Cover Image), you will be asked to update these details on the Project Settings page. Updating Basic Information will not impact your project code.
Project Details
Overview: Enter information about the project you are featuring in the Overview area in the Project Details section. Describe your project, project goals, and details about its development and design.
Add Formatting with the rich media buttons:
Tags: Enter or select up to five tags for your Featured Project. Your featured project must have at least one tag to be published. To add a tag, start typing. Use a comma or hit return to register them. Use the backspace to delete a tag. Click on suggested tags below to add them to your list of tags.
Media
Image Gallery: Upload images and GIFs by either dragging and dropping the files into the Image Gallery area under the Media section or by clicking on this area to select files from your device. Note:
Publish
Publish: Once you have completed all of the mandatory fields and are ready to add this Featured Project page to your Public Profile, click the Publish button.
Save Draft: If you aren't quite ready to publish the Featured Project page to your Public Profile, but want to save your progress so you can leave the page and come back later, click the Save Draft button.
Projects built using the 8th Wall Cloud Editor that have published a commit to the project's Public URL will have access to additional (and optional) 8th Wall-Hosted Features. For all other Featured Projects, this area will remain locked. These features are optional and are not required to publish a Featured Project page to your Public Profile.
Optional Featured Project Page settings for 8th Wall-Hosted projects include:
To enable either of these options:
Launch:
The "Launch" button, if enabled:
Cloneable Code:
The Cloneable Code feature, if enabled:
Click Publish or Save and Update.
Click Confirm to continue with your selection or click Cancel to undo your changes.
You can unpublish any of the Featured Project pages in Public Profile at any time. Unpublishing a Featured Project page will remove it from your Public Profile and it will no longer be publicly visible.
Note: This only unpublishes the Featured Project page. The WebAR experience itself is not taken down or impacted in any way.
To unpublish a Featured Project page from your Public Profile:
Click on the Unpublish Featured Project link at the bottom of the page.
Confirm you wish to unpublish this Featured Project by typing in the word "UNPUBLISH" in all caps and then click Confirm.
You will receive a message confirming your Featured Project has been unpublished. You will no longer see the Featured Project on your Public Profile.
The Account page allows you to:
NOTE: At the end of the 14-day free trial period, your account will automatically upgrade to a paid plan. You must cancel online before the end of the trial period to avoid being charged for the paid subscription.
There are no credits or refunds for partial or unused months if you forget to cancel your free trial before it ends.
To cancel Free Trial:
8th Wall subscriptions automatically renew until you cancel. There are no refunds or credits for partial or unused months.
To cancel an existing plan:
Note: You cannot cancel a Pro subscription if the workspace has any active commercial licenses. You first need to cancel your commercial licenses (which will take the projects offline) and then you can set a Pro subscription to cancel.
8th Wall plan subscriptions can be paid monthly or annually.
Note: Changes to your billing interval will take effect for the next billing cycle.
To switch between monthly and annual billing, please follow these instructions:
8th Wall subscriptions automatically renew until you cancel. There are no refunds or credits for partial or unused months.
Note:
To upgrade or downgrade to a different paid plan:
Please refer to https://www.8thwall.com/pricing for detailed information on plans and pricing.
For licensing inquiries, please contact the 8th Wall team by filling out the form at https://www.8thwall.com/licensing
If your free trial or subscription has ended and you wish to re-subscribe to a paid plan, please follow these steps:
Select a billing interval (1): Monthly or Annually
If you have a promotion code (2), enter it and click Apply
Select a payment method (3), or add a new payment method.
Click Complete Purchase (4) to activate your paid subscription. 8th Wall plan subscriptions automatically renew until you cancel. There are no refunds or credits for partial or unused months.
Commercial licenses and their payment methods can be managed from the Account page of your workspace. This widget will only be displayed if you have active commercial licenses. It allows you to modify the payment method of a commercial license, or cancel it immediately.
View commercial licenses
The Commercial License widget will display information about all commercial licenses within your workspace:
Cancel an active commercial license
IMPORTANT: Cancelling the license for an active commercial project will disable it and the WebAR project can no longer be viewed. This action cannot be undone! If you would prefer to schedule a future end to the license, please adjust the Project Duration settings for your project instead.
Change payment method for an active commercial license
The Payment Methods widget allows you to:
To add, remove or set a new default payment method:
On this page, you can manage your payment methods as well as the billing information you'd like to appear on your invoices.
Click "Add payment method" to add a new credit card to your account. If you would like this newly added credit card to be used for future bills, make sure to click "Make Default".
The Invoices widget on the Account page allows you to view and download invoices, and make payments for any outstanding invoices.
To access the invoices associated with your account:
The following information is displayed:
The "Billing Information" section of the Account page allows you to specify contact information you'd like to appear on future invoices and the email address you would like invoices/receipts sent to.
To update account billing information:
Note: Updated payment methods and invoice details will be used in future invoices.
This section describes how to create, manage and publish WebAR projects.
From the Homepage (logged in) or Workspace Dashboard, click "Start a new Project"
Select the workspace for this project.
Enter Basic Info for the project: Title, URL, Description (optional) and Cover Image (optional). All of these fields, except URL, can be edited later on the Project Settings page.
Select a Project Type:
Commercial: Commercial projects are intended for commercial use. You can develop unlimited commercial projects with your plan at no additional charge. When you’re ready to launch a commercial project so that the world can see it, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Non-Commercial: Your paid subscription allows you to develop and publish unlimited non-commercial projects. A "Non-Commercial" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Educational Use: Educational projects are intended for educational purposes only, such as a classroom setting. You can develop and publish unlimited educational projects. An "Educational Use" label will appear on the loading screen. If your project is intended for commercial use, you must select "Commercial". If you are an educational institution please contact 8th Wall for information on a custom Education plan.
The project dashboard is your hub for managing 8th Wall projects. From the project dashboard page you can manage project settings, access the 8th Wall Cloud Editor, purchase commercial licenses, manage image targets, setup custom domains, and more.
The direct URL to your Project Dashboard is in the format: www.8thwall.com/workspacename/projectname
Project Dashboard Overview
8th Wall Projects fall into one of the following categories:
Commercial: Commercial projects are intended for commercial use. You can develop unlimited commercial projects with your plan at no additional charge. When you’re ready to launch a commercial project so that the world can see it, you will need to purchase a monthly Commercial License which varies based on views. NOTE: Commercial projects cannot be purchased during a free trial. If you need to purchase a commercial license, you can end your free trial early and begin your paid subscription.
Non-Commercial: Your paid subscription allows you to develop and publish unlimited non-commercial projects. A "Non-Commercial" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Demo Use: You may create unlimited demo projects which are publicly viewable and strictly intended for pitching prospective work. A "Demo Use Only" label will appear on the loading screen. If you decide to commercialize your project at any point, switch to "Commercial" in the Project Dashboard.
Educational Use: Educational projects are intended for educational purposes only, such as a classroom setting. You can develop and publish unlimited educational projects. An "Educational Use" label will appear on the loading screen. If your project is intended for commercial use, you must select "Commercial". If you are an educational institution please contact 8th Wall for information on a custom Education plan.
If you selected the wrong project type during initial creation, please use the Project Dashboard to change the project type as appropriate.
Follow the wizard and purchase the desired commercial license:
Select billing period. You can choose between the following options:
When a commercial license is canceled or reaches a scheduled end date, billing stops and the project is no longer accessible.
To re-launch a project and purchase a new commercial license, please follow one of these options:
From the Workspace page:
From the Project Dashboard:
To manage image targets for a given Project, click either the Image Target icon in the left navigation, or the "Manage Image Targets" link on the Project Dashboard.
For detailed information on Image Targets, please refer to the Image Target documentation.
To manage LightshipVPS Wayspots for a given project, click the map icon in the left navigation.
For detailed information on Lightship VPS for Web, please refer to the Lightship VPS for Web documentation.
When using the 8th Wall Cloud Editor to develop, the Web AR experience created is published to 8th Wall's hosting infrastructure. By default, the URL of your published Web AR experience will be in the format of:
"workspace-name.8thwall.app/project-name"
If you own a custom domain and want to use it with an 8th Wall hosted project instead of the default URL, you can connect the domain to your 8th Wall project with a few simple DNS configurations. To do so you'll need access to create/edit DNS records for your domain.
NOTE: Connecting custom domains to 8th Wall Hosted projects requires a paid Plus, Pro or Enterprise subscription. This feature is not available during the Free Trial period.
WARNING: It is strongly recommended that you connect a subdomain ("ar.mydomain.com") instead of the root domain ("mydomain.com" without anything in front) as not all DNS providers support CNAME/ALIAS/ANAME records for the root domain. Please contact your DNS provider to see if they support CNAME or ALIAS records for the root domain before proceeding.
Expand "Setup your domain to point to this 8th Wall-hosted project"
In Step 1 of the connected domain wizard, enter your custom domain (e.g. www.mydomain.com) in the Primary connected domain field.
Click Connect. At this point 8th Wall will generate an SSL certificate for the custom domain(s) being connected. This operation can take a few minutes, so please be patient. Click the "Refresh status" button if needed.
Next, Verify ownership of your domain. In order to verify that you are the owner of the custom domain, you must login to your DNS provider's website and add one or more verification CNAME records. Use the Copy button to ensure you properly collect the complete Host and Value records.
These DNS records can take up to 24 hours to be verified, but in most cases happens in a matter of minutes. Please be patient and click the "Refresh verification status" button periodically if needed.
When verification has completed, you'll see a green checkmark next to the verification DNS record:
Result: Connection Record Verified:
Additional notes:
Root domains connected via an ALIAS/ANAME record (assuming your DNS provider actually supports these types of records) will not show as "connected". Your connected domain will still work as long as you have created the appropriate DNS records.
It's not possible to modify the connected domain settings once defined. If you need to make changes, you'll need to repeat Step 3 of the wizard.
If you are on a paid Pro or Enterprise plan, you can host Web AR experiences on your own web server (and view them without device authorization). In order to do so, you will need to specify a list of domains that are approved to host your project.
From the Project Dashboard page, select "Setup domains".
Expand "Setup this project for self-hosting or local development".
Enter the domain(s) or IP(s) of the web server where you will be self-hosting the project. A domain may not contain a wildcard, path, or port. Click the "+" to add multiple.
Note: Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH.
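The entry rules above (no wildcard, path, or port) can be summarized in a quick sanity check. A sketch only; this hypothetical helper is not part of the 8th Wall console, and it does not catch the subdomain-specific caveat, which is about listing both hosts rather than validating one:

```javascript
// Illustrative validator for the self-hosting domain rules described above.
function isValidSelfHostEntry(entry) {
  if (entry.includes('*')) return false // no wildcards (e.g. *.mydomain.com)
  if (entry.includes('/')) return false // no paths (e.g. mydomain.com/ar)
  if (/:\d+$/.test(entry)) return false // no ports (e.g. mydomain.com:8080)
  return true
}

console.log(isValidSelfHostEntry('www.mydomain.com')) // true
console.log(isValidSelfHostEntry('*.mydomain.com')) // false (wildcard)
console.log(isValidSelfHostEntry('mydomain.com:8080')) // false (port)
```

Remember that a passing entry still only covers that exact host: mydomain.com and www.mydomain.com must each be added separately.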
Commercial licenses, by default, will run indefinitely and automatically renew every month until you cancel. Ending a campaign will cancel the commercial license and the WebAR project will be disabled.
Campaign Duration settings can be managed from the Project Dashboard. The following options are available:
To modify campaign duration:
To cancel the campaign immediately, visit the workspace Account page and manage commercial licenses.
When a launched project is cancelled or completed, the WebAR project can no longer be viewed. Users visiting the site will see an error message stating that the project is no longer available. It is a best practice to redirect users to another URL once your campaign is over.
Specify a Campaign Redirect URL to automatically redirect your users to a different site when your campaign has ended.
Campaign Redirect URLs are supported with both 8th Wall hosted and Self-hosted Projects.
From the Project Dashboard, click "Connect a URL" and enter the desired redirect URL
As a convenience, 8th Wall branded QR codes (aka "8 Codes") can be generated for a Project, making it easy to scan from a mobile device to access your WebAR project. You are always welcome to generate your own QR codes, or use third-party QR code generation websites or services.
The QR Code on the project dashboard points to a unique "8th.io" shortlink for your project. This shortlink then redirects a user to the URL of your Web AR experience.
Both the QR Code and "8th.io" code for a given project are static and will not change based on project type or license.
8th Wall Hosted Projects (NO connected domain)
If your project is using the default 8th Wall hosted URL (in the format of "workspace-name.8thwall.app/project-name"), the QR Code and 8th.io shortlink will always redirect to the default URL. It is not possible to modify the destination URL.
8th Wall Hosted Projects (WITH connected domain)
If you have configured a connected domain for your 8th Wall hosted project, you'll have the option to set the QR Code / Shortlink destination to either the default URL of the project, or the primary connected domain.
Use the radio button to set your QR Code / Shortlink destination:
Self Hosted Projects
To generate a QR code and shortlink, enter the full URL to your self-hosted project and click Save:
The generated QR code can be downloaded in either PNG or SVG format to be included on a website, physical media, or other locations to make it easy for users to scan with their smartphones to visit the self-hosted URL. Click the pencil icon to edit the shortlink destination should the self-hosted URL change in the future.
Example:
8th Wall Projects provide basic usage analytics so that you can see how many times a project has been viewed in the past 30 days. The usage graph is a rolling 30-day window and can display either total or daily usage during that time period.
Projects with usage based commercial licenses will also display view counts for the current billing period. Usage is measured in 100 view increments. Usage from previous months can be found in the Billing Summary of the Account page.
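As an illustration of the 100-view increments, a one-line sketch (the rounding direction is an assumption for illustration, not something this document confirms):

```javascript
// Assumption: metered usage is rounded up to the next 100-view increment.
const billedViews = (views) => Math.ceil(views / 100) * 100

console.log(billedViews(1)) // 100
console.log(billedViews(250)) // 300
```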
Once your Public Profile is activated, you can publish Featured Project pages to your Public Profile page. All 8th Wall projects can be featured on a public profile, including non-commercial, demo, education and commercial projects (with both active and completed licenses). All workspace users have permission to create, save and publish featured projects. Projects can be added to or removed from a Public Profile at any time. Publishing a Featured Project does not impact the live published WebAR experience.
Please see the following sections in the documentation for more information:
Project sharing allows members of another trusted workspace to access a specific project in your workspace. There is no limit to the number of workspaces you can invite to access your project.
Members from an invited workspace can:
Members from an invited workspace cannot:
Note: Project Sharing is a feature only available to workspaces on paid Pro and Enterprise plans. Projects can be shared with workspaces on all paid plan types. (Enterprise, Pro, Plus, Starter)
Share project with another workspace:
You must have OWNER or ADMIN permissions in a workspace on a paid Pro or Enterprise plan. The invitation is sent to the OWNER and ADMIN users of the invited workspace, and must be accepted within seven days or it will expire.

Remove workspace access from a shared project:

You must have OWNER or ADMIN permissions in a workspace on a paid Pro or Enterprise plan.

Accessing a project shared by another workspace:
If you have accepted an invite to projects owned by other workspaces, they can be found under the External Projects tab of your Workspace:
The Project Settings page allows you to:
Edit Project information:
The following Code Editor preferences can be set:
Dark Mode (On/Off)
Keybindings
Enable keybindings from popular text editors. Select from:
Project Settings allows you to edit the Basic Information for your Project
Project Title
Description
Enable/Disable default splash screen
Update cover image
When your app is staged to XXXXX.staging.8thwall.app (where XXXXX represents your Workspace URL), it is passcode protected. To view the WebAR Project, a user must first enter the passcode you provide. This is a great way to preview projects with clients or other stakeholders prior to launching publicly.
A passcode should be 5 or more characters and can include letters (A-Z, lower or upper case), numbers (0-9) and spaces.
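If you want to pre-check a candidate passcode before setting it, the stated rules translate into a simple pattern. This is a sketch mirroring the documented rules (5+ characters; letters, numbers, and spaces), not an official 8th Wall API:

```javascript
// Sketch: client-side check matching the stated passcode rules.
// The rule set here mirrors the documentation, not an 8th Wall API.
const isValidPasscode = (passcode) => /^[A-Za-z0-9 ]{5,}$/.test(passcode)

console.log(isValidPasscode('abc 12'))  // true
console.log(isValidPasscode('abcd'))    // false (too short)
```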
Self-hosted Projects require an App Key to be added to the code. Self-hosting and App Keys are only available to paid Pro or Enterprise workspaces. App Keys are not available during the free trial period or to Starter/Plus plans.
To access the app key for a project:
Click the Copy button and then paste the App Key into a <script> tag in the <head> of your self-hosted index.html:

<!-- Replace the X's with your App Key -->
<script async src="//apps.8thwall.com/xrweb?appKey=XXXXX"></script>
You can specify the version of the 8th Wall engine used when serving public web clients (Release or Beta).
Users viewing a published experience will always be served the most recent version of 8th Wall Engine from that channel.
In general, 8th Wall recommends using the official release channel for production web apps.
If you would like to test your web app against a pre-release version of 8th Wall's Engine, which may contain new features and/or bug fixes that haven't gone through full QA yet, you can switch to the beta channel. Commercial experiences should not be launched on the beta channel.
Freezing Engine Version
NOTE: Engine version freezing is only available to Commercial projects with an active license.
To Freeze the current engine version, select the desired Channel (release or beta) and click the Freeze button.
To Re-join a Channel and stay up-to-date, click the Unfreeze button. This will unfreeze the Engine Version associated with your Project and re-join a Channel (release or beta) to use the latest version available to that channel.
Unpublishing your project will remove it from staging (XXXXX.staging.8thwall.app) or public (XXXXX.8thwall.app).
You can publish it again at any time from the Code Editor or Project History pages.
Click Unpublish Staging to take your Project down from XXXXX.staging.8thwall.app
Click Unpublish Public to take your Project down from XXXXX.8thwall.app
If you disable your project, your app will not be viewable. Views will not be counted while disabled.
You will still be charged for any active commercial licenses on projects that are temporarily disabled.
Toggle the slider to Disable / Enable your project.
A project with a commercial license cannot be deleted. Visit the Account page to cancel an active commercial project.
Deleting a Project will cause it to stop working. You cannot undo this operation.
Bring signage, magazines, boxes, bottles, cups, and cans to life with 8th Wall Image Targets. 8th Wall Web can detect and track flat, cylindrical and conical shaped image targets, allowing you to bring static content to life.
Not only can your designated image target trigger a web AR experience, but your content also has the ability to track directly to it.
Image targets can work in tandem with our World Tracking (SLAM), enabling experiences that combine image targets and markerless tracking.
You may track up to 5 image targets simultaneously with World Tracking enabled or up to 10 when it is disabled.
Up to 5 image targets per project can be "Autoloaded". An Autoloaded image target is enabled immediately as the page loads. This is useful for apps that use 5 or fewer image targets such as product packaging, a movie poster or business card.
The set of active image targets can be changed at any time by calling XR8.XrController.configure(). This lets you manage hundreds of image targets per project making possible use cases like geo-fenced image target hunts, AR books, guided art museum tours and much more. If your project utilizes SLAM most of the time but image targets some of the time, you can improve performance by only loading image targets when you need them. You can even read uploaded target names from URL parameters stored in different QR Codes, allowing you to have different targets initially load in the same web app depending on which QR Codes the user scans to enter the experience.
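The URL-parameter approach described above can be sketched with a small helper. The parameter name (`targets`) and the fallback list are hypothetical; only `XR8.XrController.configure()` is the real API:

```javascript
// Sketch: derive the active image target set from a URL parameter, so
// different QR codes (e.g. ?targets=poster,flyer) load different targets.
// The 'targets' parameter name and fallback list are our own convention.
const targetsFromUrl = (search, fallback) => {
  const raw = new URLSearchParams(search).get('targets')
  return raw ? raw.split(',').filter((name) => name.length > 0) : fallback
}

// In the app, pass the result to the engine:
// XR8.XrController.configure({
//   imageTargets: targetsFromUrl(window.location.search, ['default-target']),
// })
```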
Type | Image | Description |
---|---|---|
Flat | ![]() | Track 2D images like posters, signs, magazines, boxes, etc. |
Cylindrical | ![]() | Track images wrapped around cylindrical items like cans and bottles. |
Conical | ![]() | Track images wrapped around objects with a different top vs. bottom circumference, like coffee cups. |
Dimensions:
Maximum length or width: 2048 pixels.
There is no limit to the number of image targets that can be associated with a project, however, there are limits to the number of image targets that can be active at any given time.
Up to 5 image targets can be active simultaneously while World Tracking (SLAM) is enabled. If World Tracking (SLAM) is disabled (by setting "disableWorldTracking: true") you may have up to 10 simultaneously active image targets.
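The limits above can be respected when building the payload for `XR8.XrController.configure()`. In this sketch, the helper name is ours; only `disableWorldTracking` and `imageTargets` are real configuration keys:

```javascript
// Sketch: build a configure() payload that respects the simultaneous
// target limits described above (5 with SLAM enabled, 10 without).
const buildTargetConfig = (targets, useWorldTracking) => ({
  disableWorldTracking: !useWorldTracking,
  imageTargets: targets.slice(0, useWorldTracking ? 5 : 10),
})

// XR8.XrController.configure(buildTargetConfig(['target-a', 'target-b'], true))
```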
Click the Image Target icon in the left navigation or the "Manage Image Targets" link on the Project Dashboard to manage your image targets.
This screen allows you to create, edit, and delete the image targets associated with your project. Click on an existing image target to edit. Click the "+" icon for the desired image target type to create a new one.
Upload Flat Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.
Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.
Upload Cylindrical or Conical Image Target: Drag your image (.jpg, .jpeg or .png) into the upload panel, or click within the dotted region and use your file browser to select your image.
Set Tracking Region (and Orientation): Use the slider to set the region of the image that will be used to detect and track your target within the WebAR experience. The rest of the image will be discarded, and the region which you specify will be tracked in your experience.
Set Large Arc Alignment: Drag the slider until the blue line overlays the uploaded image's large arc.
Set Small Arc Alignment: Do the same for the small arc. Drag the slider until the blue line overlays the uploaded image's small arc.
Set Tracking Region (and Orientation): Drag and zoom on the image to set the portion of the image that is detected and tracked. This should be the most feature rich area of your image.
Click on any of the image targets under My Image Targets to view and/or modify their properties:
Type | Fields |
---|---|
Flat | ![]() |
Cylindrical | ![]() |
Conical | ![]() |
The set of active image targets can be modified at runtime by calling XR8.XrController.configure()
Note: The set of currently active image targets will be replaced with the new set passed to XR8.XrController.configure().
XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})
To ensure the highest quality image target tracking experience, be sure to follow these guidelines when selecting an image target.
DO have:
DON'T have:
Color: Image target detection cannot distinguish between colors, so don't rely on it as a key differentiator between targets.
For best results, use images on flat, cylindrical or conical surfaces for image target tracking.
Consider the reflectivity of your image target's physical material. Glossy surfaces and screen reflections can lower tracking quality. Use matte materials in diffuse lighting conditions for optimal tracking quality.
Note: Detection happens fastest in the center of the screen.
Good Markers | Bad Markers |
---|---|
![]() |
![]() |
![]() |
![]() |
8th Wall Web emits Events / Observables at various points in the image target lifecycle (e.g. imageloading, imagescanning, imagefound, imageupdated, imagelost). Please see the API reference for instructions on handling these events in your Web Application:
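A minimal camera pipeline module that reacts to these lifecycle events might look like the sketch below. The `reality.*` event names follow 8th Wall's convention; confirm the exact names and `detail` fields in the API reference:

```javascript
// Sketch: a camera pipeline module that logs image target lifecycle events.
const imageTargetLogger = {
  name: 'image-target-logger',
  listeners: [
    {event: 'reality.imagefound', process: ({detail}) => console.log('found', detail.name)},
    {event: 'reality.imageupdated', process: ({detail}) => console.log('updated', detail.name)},
    {event: 'reality.imagelost', process: ({detail}) => console.log('lost', detail.name)},
  ],
}

// Register it before the engine starts:
// XR8.addCameraPipelineModule(imageTargetLogger)
```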
Example Projects
https://github.com/8thwall/web/tree/master/examples/aframe/artgallery
https://github.com/8thwall/web/tree/master/examples/aframe/flyer
With the Lightship Visual Positioning System (VPS) 8th Wall developers now have the power to determine a user's position and orientation with centimeter-level accuracy - in seconds. Using the 8th Wall platform, you can use Lightship VPS in your WebAR projects to create location-based web AR experiences that connect the real world with the digital one. WebAR content can be anchored to locations, enabling virtual objects to interact with the space they are in. This makes the augmented reality experience feel more personal, more meaningful, more real, and gives users new reasons to explore the world around them.
The Geospatial Browser can be accessed from within your Project by selecting the map icon in the left hand menu (annotated as #1 in the image below). On this page you will find a map view (#2) which you can use to search to find VPS-activated Niantic Wayspots. Selecting a VPS-activated location will display the 3D mesh of the location (#3) so you can verify you have selected the correct location and add it to your project (#4).
When you add a VPS-activated Wayspot to your project you will see a Wayspot in the "Project Wayspots" table in the Geospatial Browser (annotated as #1 in the image below). Once you have a Wayspot in the “Project Wayspots” table you can use the "Download" button (#2) to download a GLB or OBJ (toggle shown as #3) version of the 3D mesh and open it in third-party 3D software applications, such as Blender, or import it directly into your 8th Wall project. When referencing Wayspots in your project code you will need to copy the "Name" field (#4) from the "Project Wayspots" table.
If the location you'd like to use in your project is not available as a Wayspot, you can submit Wayspot locations to Niantic by following the instructions in the Create New Wayspot section.
Select a location on the map where you want to create a new Wayspot. (see Wayspot Requirements to learn more about choosing a good location to create a Wayspot).
Create Wayspot: Click the "Create Wayspot" button to start the process to create a new Wayspot.
iOS
The Niantic Wayfarer App requires iOS 12 or later and an iPhone 8 or later. A LiDAR-capable device is not required.
To install the Niantic Wayfarer App, go to Testflight for Niantic Wayfarer (8th.io/wayfarer-ios) on your iOS device.
Android (Beta)
The Niantic Wayfarer App requires the ARCore package.
To install the Niantic Wayfarer App, go to Niantic Wayfarer (8th.io/wayfarer-android) on your Android device.
You can add scans to public Wayspots as well as create private scans with the Niantic Wayfarer App.
Once you have installed the app, login with your 8th Wall credentials by pressing the Login with 8th Wall button.
If you have access to multiple workspaces, select a workspace by pressing the 8th Wall Workspace dropdown on the profile page.
Login Page | Profile Page |
---|---|
![]() |
![]() |
On the Map page, select a Wayspot to add a scan to a public wayspot (1), or select Scan to add a private scan to your workspace (2).
Take a scan of the area using the recommended scanning technique.
Map Page | Scanning |
---|---|
![]() |
![]() |
Once the scan has been completed, select either public or private, and then upload.
Scan Type | Scan Upload |
---|---|
![]() |
![]() |
Processing scans can take 15-30 minutes. Once processed, scans will populate in the geospatial browser.
Issues related to scanning or processing should be directed to support@lightship.dev.
You can find more information on how to use the Wayfarer app in the Lightship documentation.
Scanned VPS-activated locations should be no larger than a 10-meter diameter around the location. For example, a typical statue would work as a VPS-activated Wayspot. An entire building, however, would not. One face or doorway/entrance into a building might work. We recommend sticking with smaller areas for starters (e.g. a desk, statue, or mural).
Before scanning, be aware of your surroundings and ensure you have the right to access the location you are scanning.
Video of recommended Wayspot scanning technique:
Things to avoid while scanning
Private scans are a single mesh, available to only one workspace, to develop and test VPS experiences. While private scans are a great solution for developing and testing VPS experiences while a public wayspot is being nominated or activated, they are not authorized for use in published projects.
Private scans are created using the Niantic Wayfarer app. Ensure you’re logged in to Wayfarer using 8th Wall credentials and that the correct workspace is selected from the Profile page. The private scan will only be available in the selected 8th Wall workspace at the time of scanning and uploading. Scans can not be moved to a different workspace or Lightship account.
In the Wayfarer app, select Scan and take a scan of the area.
Private scans should be 60 seconds or less; a new mesh is generated every 60 seconds, so scanning for 120 seconds will result in 2 private scans. All private scans are unaligned.
Once processed, you can preview the mesh and add it to your project from the geospatial browser Private Scans tab.
If your private scan fails processing, you may need to rescan. Reach out to support@lightship.dev for more information.
In the Geospatial Browser, you will see four different types of Wayspots:
Type | Icon | Description |
---|---|---|
Public | ![]() |
"Public" Wayspots have been approved by Niantic's Trust & Safety team and have met the required criteria of safety and public accessibility. These Wayspots may be used in published projects. |
Pending | ![]() |
"Pending" Wayspots are being reviewed by Niantic's Trust & Safety team to determine if they meet the required criteria of safety and public accessibility. This process can take up to 2 business days. Pending Wayspots can be scanned and activated while waiting for the review to complete. |
Rejected | ![]() |
"Rejected" Wayspots may have failed Niantic's Trust & Safety review, be a duplicate of an existing or previously rejected Wayspot, or may not be allowed by Niantic for another reason. These Wayspots cannot be added to projects. |
Private | ![]() |
"Private" Wayspots are only accessible to your Workspace by scanning the location using Niantic's Wayfarer app. Private Wayspots are intended for use during development and may not be included in a published project. |
For questions or issues related to creating Wayspots, or status of existing Wayspots, please contact support@lightship.dev
In the Geospatial Browser, you will see five different statuses for Wayspots:
Status | Icon | Description |
---|---|---|
Not Activated | ![]() |
Wayspots with a status of 'Not Activated' have not had any scans submitted for the location. A minimum of 10 viable scans must be submitted for the location before you will be able to request activation. After one scan is submitted the Wayspot status will change to 'Scanning'. |
Scanning | ![]() |
Wayspots with a status of 'Scanning' have had at least one scan submitted for the location. A minimum of 10 viable scans must be submitted for the location before you will be able to request activation. |
Processing | ![]() |
Wayspots with a status of 'Processing' have had an activation request submitted and will display the 'Processing' status until the activation process has completed. Please allow up to 7 business days for the mapping process to complete. You will receive an email when the process is complete. |
Active | ![]() |
Wayspots with a status of 'Active' are available to be used in projects to create WebAR content using Lightship VPS for Web. |
Failed | ![]() |
Wayspots with a status of 'Failed' encountered an issue during the activation process. This could be a result of a number of factors, such as poor suitability of the location for VPS, insufficient scans, or corrupt data. Unfortunately this means that this Wayspot cannot be used to create WebAR content using Lightship VPS. We encourage you to find a new Wayspot to use in your 8th Wall project. |
For questions or issues related to Wayspot scanning, activating or status, please contact support@lightship.dev
Wayspots will only be approved and made publicly available if they meet the following criteria:
Wayspots perform better on Lightship VPS, when they also meet the following criteria:
There is no limit to the number of Wayspots that can be associated with an 8th Wall project. Wayspots are localized server side via the Lightship VPS service.
8th Wall emits events at various stages in the Project Wayspot lifecycle (e.g. scanning, found, updated, lost, etc). Please see the API reference for specific instructions on handling these events in your web application:
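These lifecycle stages can be handled from a camera pipeline module, sketched below. The `reality.projectwayspot*` event names mirror the stages listed above but should be verified against the API reference:

```javascript
// Sketch: listeners for Project Wayspot lifecycle events (scanning,
// found, updated, lost). Event names should be confirmed in the API docs.
const wayspotWatcher = {
  name: 'wayspot-watcher',
  listeners: [
    {event: 'reality.projectwayspotscanning', process: () => console.log('scanning...')},
    {event: 'reality.projectwayspotfound', process: ({detail}) => console.log('found', detail.name)},
    {event: 'reality.projectwayspotupdated', process: ({detail}) => console.log('updated', detail.name)},
    {event: 'reality.projectwayspotlost', process: ({detail}) => console.log('lost', detail.name)},
  ],
}

// XR8.addCameraPipelineModule(wayspotWatcher)
```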
After a wayspot has been VPS activated, Niantic provides a localizability rating in the Geospatial browser. Wayspot details display either Low Localizability or High Localizability.
Localizability refers to the wayspot’s ability to localize at any time. Wayspots with several scans in all types of lighting tend to have a high localizability. Wayspots with minimum required scans or a majority of scans in one type of lighting tend to have a low localizability.
Localizability rating is an automated process and may not reflect the actual performance of the Wayspot. The best way to determine localizability is to try it out yourself.
The unaligned warning can happen for various reasons and means localization against the mesh can not be guaranteed. Although the mesh may work well for localization, the warning indicates the mesh is experimental and should be used at your own risk.
Note that all private scans are unaligned.
To enable Lightship VPS in your WebAR project, you'll need to set enableVps to true.

For A-Frame projects, set enableVps: true on the xrweb component on the <a-scene>.

For Non-AFrame projects, set enableVps: true in the call to XR8.XrController.configure() prior to engine start.
<a-scene coaching-overlay landing-page xrextras-loading xrextras-runtime-error ... xrweb="enableVps: true;">
XR8.XrController.configure({enableVps: true})
// Then, start the 8th Wall engine
8th Wall Modules is a powerful feature of the 8th Wall Cloud Editor designed to dramatically increase the efficiency of project development. Modules allow you to save and reuse components (code, assets, files) within your Workspace, and to find and import 8th Wall-created Modules into your project.
8th Wall Modules aim to:
Modules enable you to add modularized assets, files, and code and import them into your projects with a versioning system. This allows you to focus your project code on key differentiators and easily import common functionality via a module that you create.
To create a new module in your workspace:
You can also create a new module directly within the context of a project. Within your Cloud Editor project, press the "+" button next to Modules, then press "Create New Module" and continue with the instructions below.
Enter Basic info for the module: provide a Module ID (this ID appears in your workspace URL and can be used to reference your module in project code) and a Module Title. The Module Title can be edited later on the Module Settings page.
Once you have created your module, you’ll be taken to the module.js file within the Cloud Editor. From here you can begin developing your modules. More details on module development can be found in the Developing your Module section.
Module development is slightly different from project development. Modules cannot be run on their own and can only be run after being imported into a project. Modules can be developed within a module specific view of the Cloud Editor, or within the context of a project. Modules that you develop are only available to the workspace they are developed in.
When developing a module within the module specific view you will not see a “Preview” button on the top navigation of the Cloud Editor since modules can only be previewed when imported into a project.
The main components of a module include:
manifest.json
Within manifest.json you can create parameters that are editable via a visual configurator when modules are imported into projects. Your module.js code can subscribe to the parameters you make available in the module manifest to dynamically change based on user input when configuring the module within the context of a project.
The module config builder automatically starts with one parameter group available. Parameter groups can be used for logical divisions of parameters which are then expressed and grouped visually when using your module in a project.
Parameters can be of type String, Number, Boolean, & Resource.

NOTE: The order of config groups, and of parameters within these groups, dictates the order displayed to users when using a module within a project. You can easily reorder parameters within a group, as well as reorder config groups, by dragging them into the order you want. To move a parameter from one group to another, press the arrow icon on the parameter field and select the destination group from the dropdown.
If you are creating a module manifest for your module, you will be able to select from different parameter types including String, Number, Boolean, & Resource. Details on each parameter type:
String
String parameters have the following editable fields:
Parameter Fields | Type | Description |
---|---|---|
Label (1) | String | A human readable name for your parameter that will be displayed in the configuration UI when the module is imported into a project. The default is dynamically generated based on the parameter name. |
Default Optional | String | The default string value if none is specified when the module is imported into a project. The default is "". |
Number
Number parameters have the following editable fields:
Parameter Fields | Type | Description |
---|---|---|
Label (1) | String | A human readable name for your parameter that will be displayed in the configuration UI when the module is imported into a project. The default is dynamically generated based on the parameter name. |
Default Optional | Number | The default number value if none is specified when the module is imported into a project. The default is null . |
Min Optional | Number | The minimum number value a user can input when the module is imported into a project. The default is null . |
Max Optional | Number | The maximum number value a user can input when the module is imported into a project. The default is null . |
Boolean
Boolean parameters have the following editable fields:
Parameter Fields | Type | Description |
---|---|---|
Label (1) | String | A human readable name for your parameter that will be displayed in the configuration UI when the module is imported into a project. The default is dynamically generated based on the parameter name. |
Default Optional | Boolean | The default boolean value if none is specified when the module is imported into a project. The default is false . |
Label if True Optional | String | The label for the true boolean option that will be displayed in the configuration UI when the module is imported into a project. The default is true . |
Label if False Optional | String | The label for the false boolean option that will be displayed in the configuration UI when the module is imported into a project. The default is false . |
Resource
Resource parameters have the following editable fields:
Parameter Fields | Type | Description |
---|---|---|
Label (1) | String | A human readable name for your parameter that will be displayed in the configuration UI when the module is imported into a project. The default is dynamically generated based on the parameter name. |
Allow None (2) | Boolean | Enables/disables the ability to explicitly set the resource to null from the configuration UI when the module is imported into the project. The default is false . |
Allowed Asset Extensions Optional | File Types | Restricts the file types that can be uploaded via the configuration UI when the module is imported into a project. The default is all file types. |
Default Resource Optional | File | The default resource if none is specified when the module is imported into a project. The default is null . |
module.js
module.js is the main entry point for your 8th Wall module. Code in module.js will execute before the project is loaded. You can also add other files and assets and reference them within module.js.
Modules can be very different depending on their purpose, and your development style. Typically modules contain some of the following elements:
import {subscribe} from 'config' // config is how you access your module options
subscribe((config) => {
// Your code does something with the config here
})
export {
// Export properties here
}
readme.md
You can include a readme in your module simply by creating a file named readme.md in your module's file directory. Just like project readmes, module readmes can be formatted using markdown and can include assets like pictures and videos.
NOTE: If your module has a readme, it will automatically be packaged with the module when you deploy a version. The appropriate module readme will be shown in context, depending on the version of the module being used in the project.
You can enable Development Mode within the context of a project on modules owned by your workspace by toggling "Development Mode" (shown in red in the image below) on the module configuration page. Once Development Mode is enabled, the module's underlying code and files will become visible in the left side-pane.
When a module is in Development Mode within the context of a project you will see additional options on the configuration page including: module client controls (in teal), a module deployment button (in pink), and an "Edit mode" toggle to switch between editing the content of the visual configuration page and using the configuration.
When you are developing modules within the context of a project and have changes to land, you will see a land flow that takes you through project and module changes. You can choose whether or not to land specific changes. Any project or module with changes that you are landing must have a commit message added before you can complete landing your code.
When you are developing modules within the context of a project and have changes, you will also notice updates to the Abandon & Revert changes options in the Cloud Editor. You can choose whether to Abandon/Revert only project changes, or changes to both your project and any modules in development.
Initial Module Deployment
Deploying modules enables you to share stable versions, while allowing projects to subscribe to module updates within a version family. This can allow you to push non-breaking module updates to your projects automatically.
To deploy a module for the first time:
Deploying module updates is similar to deploying a module for the first time with two additional deployment options.
Version Type: When deploying a module update you will be prompted to choose whether the update is a bug fix, new feature, or major release.
When there is a pre-release active, you can continue to update the pre-release version until you either promote the pre-release, or abandon it.
To edit a module pre-release:
Modules enable you to add reusable components to your project, allowing you to focus on the development of your core experience. The 8th Wall Cloud Editor allows you to import your own modules, or modules published by 8th Wall, directly into your projects.
To import a module into your Cloud Editor project:
Press "Public Modules" to import a module created by 8th Wall, or "My Modules" to import a module created by a member of your workspace.
Select the module that you want to import from the list.
Press "Import" to add your module to your project. Take note of the module alias. If you already have a module in your project with the same alias, you may need to rename your module.
The module is now visible in your project listed under the "Modules" section.
Landing Pages are an evolution of our popular "Almost There" pages.
Why Use Landing Pages?
We have transformed these pages to become powerful branding and marketing opportunities for you and your clients. All Landing Page templates are optimized for branding and education with various layouts, an improved QR code design and support for key media.
Landing Pages ensure that your users have a meaningful experience no matter what device they are on. They appear on devices that are not allowed to access, or not capable of accessing, the Web AR experience directly. They also continue our mission of making AR accessible by helping users get to the right destination to engage with AR.
We designed Landing Pages in a manner which makes it extremely easy for developers to customize the page. We want you to benefit from an optimized Landing Page while still enabling you to spend your time on building your WebAR experience.
Landing Pages Intelligently Adapt To Your Configuration:
Use Landing Pages in Your Project:
head.html
<meta name="8thwall:package" content="@8thwall.landing-page">
Note: For Self-Hosted projects, you would add the following <script> tag to your page instead:
<script src='https://cdn.8thwall.com/web/landing-page/landing-page.js'></script>
Remove xrextras-almost-there from your A-Frame project, or XRExtras.AlmostThere.pipelineModule() from your Non-AFrame project. (Landing Pages include almost-there logic in addition to the updates to the QR code page.)
Optionally, customize the parameters of your landing-page
component as defined below. For Non-AFrame projects, please refer to the LandingPage.configure() documentation.
A-Frame component parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
logoSrc | String | | Image source for brand logo image. |
logoAlt | String | "Logo" | Alt text for brand logo image. |
promptPrefix | String | "Scan or visit" | Sets the text string for call to action before the URL for the experience is displayed. |
url | String | 8th.io link if 8th Wall hosted, or current page | Sets the displayed URL and QR code. |
promptSuffix | String | "to continue" | Sets the text string for call to action after the URL for the experience is displayed. |
textColor | Hex Color | "#ffffff" | Color of all the text on the Landing Page. |
font | String | "'Nunito', sans-serif" | Font of all text on the Landing Page. This parameter accepts valid CSS font-family arguments. |
textShadow | Bool | false | Sets text-shadow property for all text on the Landing Page. |
backgroundSrc | String | | Image source for background image. |
backgroundBlur | Number | 0.0 | Applies a blur effect to the backgroundSrc if one is specified. (Typical values are between 0.0 and 1.0) |
backgroundColor | String | linear-gradient(#464766,#2D2E43) | Background color of the Landing Page. This parameter accepts valid CSS background-color arguments. Background color is not displayed if a backgroundSrc or sceneEnvMap is set. |
mediaSrc | String | App’s cover image, if present | Media source (3D model, image, or video) for landing page hero content. Accepted media sources include an a-asset-item id or a URL. |
mediaAlt | String | "Preview" | Alt text for landing page image content. |
mediaAutoplay | Bool | true | If the mediaSrc is a video, specifies if the video should be played on load with sound muted. |
mediaAnimation | String | [First animation clip of model if present] | If the mediaSrc is a 3D model, specify whether to play a specific animation clip associated with the model, or "none". |
mediaControls | String | "minimal" | If mediaSrc is a video, specify media controls displayed to the user. Choose from "none", "minimal" or "browser" (browser defaults). |
sceneEnvMap | String | "field" | Image source pointing to an equirectangular image, or one of the following preset environments: "field", "hill", "city", "pastel", or "space". |
sceneOrbitIdle | String | "spin" | If the mediaSrc is a 3D model, specify whether the model should "spin", or "none". |
sceneOrbitInteraction | String | "drag" | If the mediaSrc is a 3D model, specify whether the user can interact with the orbit controls, choose "drag", or "none". |
sceneLightingIntensity | Number | 1.0 | If the mediaSrc is a 3D model, specify the strength of the light illuminating the model. |
vrPromptPrefix | String | "or visit" | Sets the text string for call to action before the URL for the experience is displayed on VR headsets. |
Example - 3D Layout with user specified parameters
<a-scene landing-page=" mediaSrc: https://www.mydomain.com/bat.glb; sceneEnvMap: hill" xrextras-loading xrextras-gesture-detector ... xrweb>
// Configured here
LandingPage.configure({
mediaSrc: 'https://www.mydomain.com/bat.glb',
sceneEnvMap: 'hill',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
// Added here
LandingPage.pipelineModule(),
...
])
Why Use the Coaching Overlay?
The Coaching Overlay onboards users to absolute scale experiences ensuring that they collect the best possible data to determine scale. We designed the Coaching Overlay to make it easily customizable by developers, enabling you to focus your time on building your WebAR experience.
Use Coaching Overlay in Your Project:
head.html
<meta name="8thwall:package" content="@8thwall.coaching-overlay">
Note: For Self-Hosted projects, add the following <script> tag to your page instead:
<script src='https://cdn.8thwall.com/web/coaching-overlay/coaching-overlay.js'></script>
Optionally, customize the parameters of your coaching-overlay component as defined below. For Non-AFrame projects, please refer to the CoachingOverlay.configure() documentation.
A-Frame component parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
animationColor | String | "white" | Color of the coaching overlay animation. This parameter accepts valid CSS color arguments. |
promptColor | String | "white" | Color of all the coaching overlay text. This parameter accepts valid CSS color arguments. |
promptText | String | "Move device forward and back" | Sets the text string for the animation explainer text that informs users of the motion they need to make to generate scale. |
disablePrompt | Boolean | false | Set to true to hide default coaching overlay in order to use coaching overlay events for a custom overlay. |
Creating a custom Coaching Overlay for your project
Coaching Overlay can be added as a pipeline module and then hidden (using the disablePrompt parameter) so that you can easily use the coaching overlay events to trigger a custom overlay.
Coaching overlay events are only fired when scale is set to absolute. Events are emitted by the 8th Wall camera run loop and can be listened to from within a pipeline module. These events include:
coaching-overlay.show: event is triggered when the coaching overlay should be shown.
coaching-overlay.hide: event is triggered when the coaching overlay should be hidden.
Example - Coaching Overlay with user specified parameters
<a-scene coaching-overlay=" animationColor: #E86FFF; promptText: To generate scale push your phone forward and then pull back;" xrextras-loading xrextras-gesture-detector ... xrweb="scale: absolute;">
// Configured here
CoachingOverlay.configure({
animationColor: '#E86FFF',
promptText: 'To generate scale push your phone forward and then pull back',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
LandingPage.pipelineModule(),
// Added here
CoachingOverlay.pipelineModule(),
...
])
this.el.sceneEl.addEventListener('coaching-overlay.show', () => {
// Do something
})
this.el.sceneEl.addEventListener('coaching-overlay.hide', () => {
// Do something
})
const myShow = () => {
console.log('EXAMPLE: COACHING OVERLAY - SHOW')
}
const myHide = () => {
console.log('EXAMPLE: COACHING OVERLAY - HIDE')
}
const myPipelineModule = {
name: 'my-coaching-overlay',
listeners: [
{event: 'coaching-overlay.show', process: myShow},
{event: 'coaching-overlay.hide', process: myHide},
],
}
const onxrloaded = () => {
XR8.addCameraPipelineModule(myPipelineModule)
}
window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)
Why Use the Lightship VPS Coaching Overlay?
The Coaching Overlay onboards users to Lightship VPS experiences ensuring that they properly localize at real-world locations. We designed the Coaching Overlay to make it easily customizable by developers, enabling you to focus your time on building your WebAR experience.
Use Coaching Overlay in Your Project:
head.html
<meta name="8thwall:package" content="@8thwall.coaching-overlay">
Note: For Self-Hosted projects, add the following <script> tag to your page instead:
<script src='https://cdn.8thwall.com/web/coaching-overlay/coaching-overlay.js'></script>
Optionally, customize the parameters of your vps-coaching-overlay component as defined below. For Non-AFrame projects, please refer to the VpsCoachingOverlay.configure() documentation.
A-Frame component parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
wayspot-name | String | | The name of the Wayspot which the coaching overlay is guiding the user to localize at. If no Wayspot name is specified, the nearest project Wayspot is used. If the project does not have any project Wayspots, the nearest Wayspot is used. |
hint-image | String | | Image displayed to the user to guide them to the real-world location. If no hint-image is specified, the Wayspot's default image is used. If the Wayspot does not have a default image, no image is shown. |
animation-color | String | "#FFFFFF" | Color of the coaching overlay animation. This parameter accepts valid CSS color arguments. |
animation-duration | Number | 5000 | Total time the hint image is displayed before shrinking (in milliseconds). |
text-color | String | "#FFFFFF" | Color of all the coaching overlay text. This parameter accepts valid CSS color arguments. |
prompt-prefix | String | "Point your camera at" | Sets the text string for advised user action above the name of the Wayspot. |
prompt-suffix | String | "and move around" | Sets the text string for advised user action below the name of the Wayspot. |
status-text | String | "Scanning..." | Sets the text string that is displayed below the hint-image when it is in the shrunken state. |
disable-prompt | Boolean | false | Set to true to hide default coaching overlay in order to use coaching overlay events for a custom overlay. |
Creating a custom Coaching Overlay for your project
Coaching Overlay can be added as a pipeline module and then hidden (using the disable-prompt parameter) so that you can easily use the coaching overlay events to trigger a custom overlay.
Lightship VPS Coaching Overlay events are only fired when enableVps is set to true. Events are emitted by the 8th Wall camera run loop and can be listened to from within a pipeline module. These events include:
vps-coaching-overlay.show: event is triggered when the coaching overlay should be shown.
vps-coaching-overlay.hide: event is triggered when the coaching overlay should be hidden.
Example - Coaching Overlay with user specified parameters
<a-scene vps-coaching-overlay=" text-color: #0000FF; prompt-prefix: Go look for;" xrextras-loading xrextras-gesture-detector ... xrweb="enableVps: true;">
// Configured here
VpsCoachingOverlay.configure({
textColor: '#0000FF',
promptPrefix: 'Go look for',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
LandingPage.pipelineModule(),
// Added here
VpsCoachingOverlay.pipelineModule(),
...
])
this.el.sceneEl.addEventListener('vps-coaching-overlay.show', () => {
// Do something
})
this.el.sceneEl.addEventListener('vps-coaching-overlay.hide', () => {
// Do something
})
const myShow = () => {
console.log('EXAMPLE: VPS COACHING OVERLAY - SHOW')
}
const myHide = () => {
console.log('EXAMPLE: VPS COACHING OVERLAY - HIDE')
}
const myPipelineModule = {
name: 'my-coaching-overlay',
listeners: [
{event: 'vps-coaching-overlay.show', process: myShow},
{event: 'vps-coaching-overlay.hide', process: myHide},
],
}
const onxrloaded = () => {
XR8.addCameraPipelineModule(myPipelineModule)
}
window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)
Why Use the Sky Effects Coaching Overlay?
The Coaching Overlay onboards users to Sky Effects experiences ensuring that they are pointing their device at the sky. We designed the Coaching Overlay to make it easily customizable by developers, enabling you to focus your time on building your WebAR experience.
Use Coaching Overlay in Your Project:
head.html
<meta name="8thwall:package" content="@8thwall.coaching-overlay">
Note: For Self-Hosted projects, add the following <script> tag to your page instead:
<script src='https://cdn.8thwall.com/web/coaching-overlay/coaching-overlay.js'></script>
Optionally, customize the parameters of your sky-coaching-overlay component as defined below. For Non-AFrame projects, please refer to the SkyCoachingOverlay.configure() documentation.
A-Frame component parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
animationColor | String | "white" | Color of the coaching overlay animation. This parameter accepts valid CSS color arguments. |
promptColor | String | "white" | Color of all the coaching overlay text. This parameter accepts valid CSS color arguments. |
promptText | String | "Point your phone towards the sky" | Sets the text string for the animation explainer text that informs users of the motion they need to make. |
disablePrompt | Boolean | false | Set to true to hide default coaching overlay in order to use coaching overlay events for a custom overlay. |
autoByThreshold | Boolean | true | Automatically show/hide the overlay based on whether the percentage of sky pixels is above/below hideThreshold / showThreshold. |
showThreshold | Number | 0.1 | Show a currently hidden overlay if the percentage of sky pixels is below this threshold. |
hideThreshold | Number | 0.05 | Hide a currently shown overlay if the percentage of sky pixels is above this threshold. |
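The threshold behavior above can be sketched as a small state update. This is a hedged illustration of the documented rules only; the function name and structure are illustrative, not the engine's internals:

```javascript
// Hedged sketch of the documented auto show/hide rule. `skyFraction` is the
// fraction of camera pixels classified as sky (0.0-1.0); `visible` is the
// overlay's current state. This mirrors the table above, not engine code.
function nextOverlayVisibility(visible, skyFraction, opts = {}) {
  const {showThreshold = 0.1, hideThreshold = 0.05} = opts
  if (!visible && skyFraction < showThreshold) return true   // too little sky: coach the user
  if (visible && skyFraction > hideThreshold) return false   // enough sky: get out of the way
  return visible
}

console.log(nextOverlayVisibility(false, 0.02))  // true: show the overlay
console.log(nextOverlayVisibility(true, 0.5))    // false: hide the overlay
```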
Creating a custom Coaching Overlay for your project
Sky Coaching Overlay can be added as a pipeline module and then hidden (using the disablePrompt parameter) so that you can easily use the coaching overlay events to trigger a custom overlay. These events include:
sky-coaching-overlay.show: event is triggered when the coaching overlay should be shown.
sky-coaching-overlay.hide: event is triggered when the coaching overlay should be hidden.
Example - Sky Coaching Overlay with user specified parameters
<a-scene ... xrlayers="cameraDirection: back;" sky-coaching-overlay=" animationColor: #E86FFF; promptText: Look at the sky!!;" ... renderer="colorManagement: true" >
// Configured here
SkyCoachingOverlay.configure({
animationColor: '#E86FFF',
promptText: 'Look at the sky!!',
})
XR8.addCameraPipelineModules([ // Add camera pipeline modules.
// Existing pipeline modules.
XR8.GlTextureRenderer.pipelineModule(), // Draws the camera feed.
XR8.Threejs.pipelineModule(), // Creates a ThreeJS AR Scene as well as a Sky scene.
window.LandingPage.pipelineModule(), // Detects unsupported browsers and gives hints.
XRExtras.FullWindowCanvas.pipelineModule(), // Modifies the canvas to fill the window.
XRExtras.Loading.pipelineModule(), // Manages the loading screen on startup.
XRExtras.RuntimeError.pipelineModule(), // Shows an error image on runtime error.
// Enables Sky Segmentation.
XR8.LayersController.pipelineModule(),
SkyCoachingOverlay.pipelineModule(),
...
mySkySampleScenePipelineModule(),
])
XR8.LayersController.configure({layers: {sky: {invertLayerMask: false}}})
XR8.Threejs.configure({layerScenes: ['sky']})
this.el.sceneEl.addEventListener('sky-coaching-overlay.show', () => {
// Do something
})
this.el.sceneEl.addEventListener('sky-coaching-overlay.hide', () => {
// Do something
})
const myShow = () => {
console.log('EXAMPLE: SKY COACHING OVERLAY - SHOW')
}
const myHide = () => {
console.log('EXAMPLE: SKY COACHING OVERLAY - HIDE')
}
const myPipelineModule = {
name: 'my-sky-coaching-overlay',
listeners: [
{event: 'sky-coaching-overlay.show', process: myShow},
{event: 'sky-coaching-overlay.hide', process: myHide},
],
}
const onxrloaded = () => {
XR8.addCameraPipelineModule(myPipelineModule)
}
window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)
8th Wall's XRExtras library provides modules that handle the most common WebAR application needs, including the load screen, social link-out flows and error handling.
The Loading module displays a loading overlay and camera permissions prompt while libraries are loading and while the camera is starting up. It's the first thing your users see when they enter your WebAR experience.
This section describes how to customize the loading screen by providing values that change the color, load spinner, and load animation to match the overall design of your experience.
IDs / Classes to override
Loading Screen | iOS (13+) Motion Sensor Prompt |
---|---|
[Screenshot: loading screen with numbered elements referenced in the table below] | [Screenshot: motion sensor prompt. To customize the text, you can use a MutationObserver; please refer to the code example below.] |
A-Frame component parameters
If you are using XRExtras with an A-Frame project, the xrextras-loading module makes it easy to customize the load screen via the following parameters:
Parameter | Type | Description |
---|---|---|
cameraBackgroundColor | Hex Color | Background color of the loading screen's top section behind the camera icon and text (see Loading Screen #1 above) |
loadBackgroundColor | Hex Color | Background color of the loading screen's lower section behind the loadImage (see Loading Screen #3 above) |
loadImage | ID | The ID of an image. The image needs to be an <a-asset> (see Loading Screen #4 above) |
loadAnimation | String | Animation style of loadImage . Choose from spin (default), pulse , scale , or none |
<a-scene
  tap-place
  xrextras-almost-there
  xrextras-loading="
    loadBackgroundColor: #007AFF;
    cameraBackgroundColor: #5AC8FA;
    loadImage: #myCustomImage;
    loadAnimation: pulse"
  xrextras-runtime-error
  xrweb>
  <a-assets>
    <img id="myCustomImage" src="assets/my-custom-image.png">
  </a-assets>
const load = () => {
XRExtras.Loading.showLoading()
console.log('customizing loading spinner')
const loadImage = document.getElementById("loadImage")
if (loadImage) {
loadImage.src="img/my-custom-image.png"
}
}
window.XRExtras ? load() : window.addEventListener('xrextrasloaded', load)
#requestingCameraPermissions {
  color: black;
  background-color: white;
}
#requestingCameraIcon {
  /* This changes the image from white to black */
  filter: invert(1);
}
.prompt-box-8w {
  background-color: white;
  color: #00FF00;
}
.prompt-button-8w {
  background-color: #0000FF;
}
.button-primary-8w {
  background-color: #7611B7;
}
let inDom = false
const observer = new MutationObserver(() => {
if (document.querySelector('.prompt-box-8w')) {
if (!inDom) {
document.querySelector('.prompt-box-8w p').innerHTML = '<strong>My new text goes here</strong><br/><br/>Press Approve to continue.'
document.querySelector('.prompt-button-8w').innerHTML = 'Deny'
document.querySelector('.button-primary-8w').innerHTML = 'Approve'
}
inDom = true
} else if (inDom) {
inDom = false
observer.disconnect()
}
})
observer.observe(document.body, {childList: true})
The XRExtras MediaRecorder module makes it easy to customize the Video Recording user experience in your project.
This section describes how to customize captured media: capture button behavior (tap vs. hold), video watermarks, maximum video length, end card behavior and appearance, and more.
A-Frame primitives
xrextras-capture-button
: Adds a capture button to the scene.
Parameter | Type | Default | Description |
---|---|---|---|
capture-mode | string | "standard" | Sets the capture mode behavior. standard: tap to take photo, tap + hold to record video. fixed: tap to toggle video recording. photo: tap to take photo. One of [standard, fixed, photo] |
xrextras-capture-config
: Configures the captured media.
Parameter | Type | Default | Description |
---|---|---|---|
max-duration-ms | int | 15000 | Total video duration (in milliseconds) that the capture button allows. If the end card is disabled, this corresponds to the max user record time. |
max-dimension | int | 1280 | Maximum dimension (width or height) of captured video. For photo configuration, please see XR8.CanvasScreenshot.configure(). |
enable-end-card | bool | true | Whether the end card is included in the recorded media. |
cover-image-url | string | Project's cover image | Image source for end card cover image. |
end-card-call-to-action | string | "Try it at: " | Sets the text string for call to action on end card. |
short-link | string | Project shortlink | Sets the text string for end card shortlink. |
footer-image-url | string | Powered by 8th Wall image | Image source for end card footer image. |
watermark-image-url | string | null | Image source for watermark. |
watermark-max-width | int | 20 | Max width (%) of watermark image. |
watermark-max-height | int | 20 | Max height (%) of watermark image. |
watermark-location | string | "bottomRight" | Location of watermark image. One of topLeft, topMiddle, topRight, bottomLeft, bottomMiddle, bottomRight |
file-name-prefix | string | "my-capture-" | Sets the text string that prepends the unique timestamp on file name. |
request-mic | string | "auto" | Determines if you want to set up the microphone during initialization ("auto") or during runtime ("manual") |
include-scene-audio | bool | true | If true, the A-Frame sounds in the scene will be part of the recorded output. |
xrextras-capture-preview
: Adds a media preview prefab to the scene which allows for playback, downloading, and sharing.
Parameter | Type | Default | Description |
---|---|---|---|
action-button-share-text | string | "Share" | Sets the text string in the action button when Web Share API 2 is available (Android, iOS 15 or higher). |
action-button-view-text | string | "View" | Sets the text string in the action button when Web Share API 2 is not available in iOS (iOS 14 or below). |
XRExtras.MediaRecorder Events
XRExtras.MediaRecorder emits the following events.
Events Emitted
Event Emitted | Description |
---|---|
mediarecorder-photocomplete | Emitted after a photo is taken. |
mediarecorder-recordcomplete | Emitted after a video recording is complete. |
mediarecorder-previewready | Emitted after a previewable video recording is complete. (Android/Desktop only) |
mediarecorder-finalizeprogress | Emitted when the media recorder is making progress in the final export. (Android/Desktop only) |
mediarecorder-previewopened | Emitted after recording preview is opened. |
mediarecorder-previewclosed | Emitted after recording preview is closed. |
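For example, the events above can be observed with plain DOM listeners. This is a hedged sketch assuming the events are dispatched on `window`; adjust the event target if your setup differs:

```javascript
// Hedged sketch: reacting to MediaRecorder capture events. The handlers are
// illustrative; replace the console.log calls with your own logic.
const onPhotoComplete = () => { console.log('photo captured') }
const onRecordComplete = () => { console.log('video recording complete') }

// Guarded so the snippet is also runnable outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('mediarecorder-photocomplete', onPhotoComplete)
  window.addEventListener('mediarecorder-recordcomplete', onRecordComplete)
}
```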
<xrextras-capture-button capture-mode="standard"></xrextras-capture-button>

<xrextras-capture-config
  max-duration-ms="15000"
  max-dimension="1280"
  enable-end-card="true"
  cover-image-url=""
  end-card-call-to-action="Try it at:"
  short-link=""
  footer-image-url="//cdn.8thwall.com/web/img/almostthere/v2/poweredby-horiz-white-2.svg"
  watermark-image-url="//cdn.8thwall.com/web/img/mediarecorder/8logo.png"
  watermark-max-width="100"
  watermark-max-height="10"
  watermark-location="bottomRight"
  file-name-prefix="my-capture-"
></xrextras-capture-config>

<xrextras-capture-preview
  action-button-share-text="Share"
  action-button-view-text="View"
  finalize-text="Exporting..."
></xrextras-capture-preview>
#actionButton {
  /* change color of action button */
  background-color: #007aff !important;
}
Modules enable you to add reusable components to your project, allowing you to focus on the development of your core experience. The 8th Wall Cloud Editor allows you to import modules published by 8th Wall directly into your projects.
To import a module into your WebAR project:
Once you have added a module to your project, you may have to make changes to your code to fully integrate the module. Modules contain documentation that should be referenced to understand how to integrate the specific module into your project code.
8th Wall Payments gives developers the tools they need to add secure payments to their AR and VR web apps. Developers can use the Payments Module found in the Cloud Editor to easily add products for purchase to their project. All payments are facilitated by the 8th Wall Payments API which enables developers to collect and receive payments.
Why use 8th Wall Payments?
Easily monetize your WebAR or WebVR experiences using the Payments Module. Powered by Stripe, 8th Wall Payments provides a secure way for end users to pay for your product and for you to make money developing WebXR projects.
With the 8th Wall Payments Module, a one-step import lets you monetize your WebAR or WebVR project using extensible payment options. You can easily customize options such as cost and the item you are selling, all leveraging a streamlined checkout flow optimized for mobile, desktop and VR. Access all current and future payment types in one module, and test the success of your payment integration with the built-in test mode.
Current Payment options available:
Payment Processing
To provide this payment service, 8th Wall takes a small commission on each transaction, which is split with our payment processor, Stripe. End users must agree to 8th Wall’s Terms of Service in order to make a purchase.
Payment Processing Fee:
20% of each transaction
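As a concrete illustration of the fee, here is a quick back-of-the-envelope calculation. The price and the cent-rounding are hypothetical examples, not part of the payment terms:

```javascript
// Hedged sketch: estimated net payout on a single transaction after the 20%
// processing fee. The exact rounding behavior is an assumption for illustration.
const price = 9.99                                 // hypothetical USD price
const fee = Math.round(price * 0.20 * 100) / 100   // 20% fee, rounded to cents
const net = Math.round((price - fee) * 100) / 100
console.log(fee)  // fee === 2
console.log(net)  // net === 7.99
```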
8th Wall Payments is currently only accessible in the following countries and their respective currencies:
You must be an Admin or Owner of your 8th Wall workspace in order to sign up for the 8th Wall Payments API.
Sign up for the Payments API on your Accounts page. Once your account receives funds, you will receive payments on the 15th of every month. Amounts that have not been paid out will show up as Pending Amounts on your Accounts page.
All payments are non-refundable. If an end user has a question about their payment they can contact support.
8th Wall Payments leverages Stripe Connect for secure payment processing. In order to start building web apps with paid content, you must sign up for a Stripe Connect account through 8th Wall. This is required to use 8th Wall Payments and to get paid out.
Sign Up for Payments API on your Accounts Page
You will be directed to Stripe Connect. Follow the prompts to fill in all required fields. You will need to provide:
Details for Individual or Business
After you submit your complete information, it may take several days for Stripe to process and validate it. You can check the status of your account on the Accounts screen.
Once confirmed, you will see your bank account information on your Accounts page.
Manage Payments API Stripe Connect Account
You can view your payment details for money earned across all of your workspace web apps on the Accounts page under the Payments API overview section.
Accounts page Payment API Overview
To view your Stripe Connect account click Go to Stripe.
To update your Stripe Account payment information, such as address or bank account information, click on Update Information.
To see individual payments from your web apps, click View History.
Payments Module
Once you have signed up for 8th Wall Payments, you will need to import the Payments Module into your project in order to access the Payments API.
To import the Payments Module:
You are now ready to add paid content into your project!
Configurations
The Payments Module allows you to easily customize what type of payment option you want, the cost, the product, and more. You can also turn on Test Mode so you can ensure your payments work as expected.
Test Mode
Test Mode enables you to simulate purchases made on your web app prior to launching publicly. Turning on Test Mode allows you to integrate the Payments API into your app without having to make real purchases.
Configurations for Test Mode:
Configuration | Type | Default | Description |
---|---|---|---|
Test Mode Enabled | Boolean | False | If True, you are simulating purchases in your product; payments are not sent to the server but cached locally. If False, Test Mode is off. |
Clear Test Purchases on Run | Boolean | False | If True, Test Mode purchases are deleted so you can retest the purchase experience. If False, test purchases remain in local storage until cleared, which is useful for testing existing purchase flows. |
Access Pass
This payment type offers users paid access to AR or VR content for a limited period of time. Access Passes are well suited to paid access to AR/VR events, such as a 1-day ticket to a holographic concert or a virtual art exhibit, or 7-day access to an AR-enabled scavenger hunt.
In the end user experience, the user will:
Configurations for Access Pass Defaults
Configuration | Type | Default | Description |
---|---|---|---|
Access Duration Days | number | 1 | (Required) The number of days that this purchase is valid for. Minimum duration is 1 day, maximum duration is 7 days. |
Amount | number | 0.99 | (Required) The amount to request for payment for the specified Access Pass. Amounts have a respective minimum and maximum as defined by the Currency. AUD: $0.99 to $99.99 CAD: $0.99 to $99.99 GBP: £0.99 to £99.99 JPY: ¥99 to ¥999 NZD: $0.99 to $99.99 USD: $0.99 to $99.99 |
Access Pass Name | string | N/A | (Required) The name of the product. This will be used in the checkout form to describe to the user what they’re purchasing. |
Currency | string | usd | (Required) The currency to charge the user. Can be 'aud ', 'cad ', 'gbp ', 'jpy ', 'nzd ', or 'usd ' |
Checkout Page Language | string | en-US | (Required) The language that appears to the end user on the secure checkout page. Can be 'en-US ' (English - United States) or 'ja-JP ' (Japanese). |
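The per-currency amount limits above can be sanity-checked client-side before configuring a product. This is an illustrative sketch of the documented ranges only; the helper name is made up, and the module and server enforce the real limits:

```javascript
// Hedged sketch: validate an Access Pass amount against the documented
// per-currency minimums and maximums from the table above.
const amountRange = {
  aud: [0.99, 99.99], cad: [0.99, 99.99], gbp: [0.99, 99.99],
  jpy: [99, 999], nzd: [0.99, 99.99], usd: [0.99, 99.99],
}

const isValidAmount = (currency, amount) => {
  const [min, max] = amountRange[currency] || []
  return min !== undefined && amount >= min && amount <= max
}

console.log(isValidAmount('usd', 4.99))  // true: within $0.99-$99.99
console.log(isValidAmount('jpy', 50))    // false: below the ¥99 minimum
```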
8th Wall projects provide basic usage analytics, allowing you to see how many "views" you have received in the past 30 days. If you are looking for more detailed and/or historical analytics, we recommend adding 3rd party web analytics to your WebAR experience.
The process for adding analytics to a WebAR experience is the same as adding them to any non-AR website. You are welcome to use any analytics solution you prefer.
In this example, we’ll explain how to add Google Analytics to your 8th Wall project using Google Tag Manager (GTM) - making it easy to collect custom analytics on how users are both viewing and interacting with your WebAR experience.
Using GTM’s web-based user interface, you can define tags and create triggers that cause your tag to fire when certain events occur. In your 8th Wall project, fire events (using a single line of JavaScript) at desired places in your code.
You must already have Google Analytics and Google Tag Manager accounts and have a basic understanding of how they work.
For more information, please refer to the following Google documentation:
Google Analytics
Google Tag Manager
import * as googleTagManagerHtml from './gtm.html'
document.body.insertAdjacentHTML('afterbegin', googleTagManagerHtml)
Example:
At a minimum, create a Tag that will fire upon page load so that you can track information about visitors to your Web AR experience.
Create Tag
GTM also provides the ability to fire events when custom actions take place inside the WebAR experience. These events will be particular to your WebAR project, but some examples might be:
In this example, we’ll create a Tag (with Trigger) and add it to the "AFrame: Place Ground" sample project that fires each time a 3D model is spawned.
Create Custom Event Trigger
Create Tag
Next, create a tag that will fire when the "placeModel" trigger is fired in your code.
IMPORTANT: Make sure to save all triggers/tags created and then Submit/Publish your settings inside the GTM interface so they are live. See https://support.google.com/tagmanager/answer/6107163
Fire Event Inside 8th Wall Project
In your 8th Wall project, add the following line of JavaScript to fire this trigger at the desired place in your code:
window.dataLayer.push({event: 'placeModel'})
export const tapPlaceComponent = {
  init: function() {
    const ground = document.getElementById('ground')
    ground.addEventListener('click', (event) => {
      // Create new entity for the new object
      const newElement = document.createElement('a-entity')
      // The raycaster gives a location of the touch in the scene
      const touchPoint = event.detail.intersection.point
      newElement.setAttribute('position', touchPoint)
      const randomYRotation = Math.random() * 360
      newElement.setAttribute('rotation', '0 ' + randomYRotation + ' 0')
      newElement.setAttribute('visible', 'false')
      newElement.setAttribute('scale', '0.0001 0.0001 0.0001')
      newElement.setAttribute('shadow', {
        receive: false,
      })
      newElement.setAttribute('class', 'cantap')
      newElement.setAttribute('hold-drag', '')
      newElement.setAttribute('gltf-model', '#treeModel')
      this.el.sceneEl.appendChild(newElement)
      newElement.addEventListener('model-loaded', () => {
        // Once the model is loaded, we are ready to show it popping in using an animation
        newElement.setAttribute('visible', 'true')
        newElement.setAttribute('animation', {
          property: 'scale',
          to: '7 7 7',
          easing: 'easeOutElastic',
          dur: 800,
        })
        // **************************************************
        // Fire Google Tag Manager event once model is loaded
        // **************************************************
        window.dataLayer.push({event: 'placeModel'})
      })
    })
  },
}
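The `window.dataLayer.push()` call above assumes GTM has already initialized `dataLayer`. As a defensive sketch (the `pushGtmEvent` helper is our own name, not part of GTM or 8th Wall), you can create the array if the GTM snippet hasn't run yet:

```javascript
// Hypothetical helper: push a GTM event, creating the dataLayer first if GTM
// hasn't initialized it yet. `win` is the global window object.
const pushGtmEvent = (win, eventName) => {
  win.dataLayer = win.dataLayer || []
  win.dataLayer.push({event: eventName})
  return win.dataLayer
}

// Usage in a browser: pushGtmEvent(window, 'placeModel')
```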
The Asset bundle feature of 8th Wall's Cloud Editor allows for the use of multi-file assets. These assets typically involve files that reference each other internally using relative paths. ".glTF", ".hcap", ".msdf" and cubemap assets are a few common examples.
In the case of .hcap files, you load the asset via the "main" file, e.g. "my-hologram.hcap". Inside this file are many references to other dependent resources, such as .mp4 and .bin files. These filenames are referenced and loaded by the main file as URLs with paths relative to the .hcap file.
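To see how those relative references resolve, here is an illustrative sketch using the standard URL API (the filenames are hypothetical; the engine performs this resolution internally when it loads the bundle):

```javascript
// Relative resources inside a "main" file resolve against the main file's URL.
const mainFile = 'https://example.com/assets/my-hologram.hcap'
const resolve = (relative) => new URL(relative, mainFile).href

resolve('my-hologram.mp4')    // resolves alongside the .hcap file
resolve('./data/frames.bin')  // subfolder paths resolve relative to it too
```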
Use one of the following methods to prepare your files before upload:
Option 1:
In the Cloud Editor, click the "+" to the right of ASSETS and select "New asset bundle". Next, select asset type. If you aren't uploading a glTF or HCAP asset, select "Other".
Option 2:
Alternatively, you can drag the assets or ZIP directly into the ASSETS pane at the bottom-right of the Cloud Editor.
After the files have been uploaded, you'll be able to preview the assets before adding them to your project. Select individual files in the left pane to preview them on the right.
If your asset type requires you to reference a file, set this file as your "main file". If your asset type requires you to reference a folder (cubemaps, etc.), set "none" as your "main file".
Note: This step is not required for glTF or HCAP assets. The main file is set automatically for these asset types.
The main file cannot be changed later. If you select the wrong file, you'll have to re-upload the asset bundle.
Give the asset bundle a name. This is the filename by which you'll access the asset bundle within your project.
Once the upload completes, the asset bundle will be added to your Cloud Editor project.
Assets can be previewed directly within the Cloud Editor. Select an asset on the left to preview on the right. You can preview a specific asset inside the bundle by expanding the "Show contents" menu on the right and selecting an asset inside.
To rename an asset, click the "down arrow" icon to the right of your asset and choose Rename. Edit the name of the asset and hit Enter to save. Important: if you rename an asset, you'll need to go through your project and make sure all references point to the updated asset name.
To delete an asset, click the "down arrow" icon to the right of your asset and choose Delete.
To reference the asset bundle from an html file in your project (e.g. body.html), simply provide the appropriate path to the src= or gltf-model= parameter.
To reference the asset bundle from javascript, use require()
<!-- Example 1 -->
<a-assets>
  <a-asset-item id="myModel" src="assets/sand-castle.gltf"></a-asset-item>
</a-assets>
<a-entity
  id="model"
  gltf-model="#myModel"
  class="cantap"
  scale="3 3 3"
  shadow="receive: false">
</a-entity>

<!-- Example 2 -->
<holo-cap
  id="holo"
  src="./assets/my-hologram.hcap"
  holo-scale="6"
  holo-touch-target="1.65 0.35"
  xrextras-hold-drag
  xrextras-two-finger-rotate
  xrextras-pinch-scale="scale: 6">
</holo-cap>
const modelFile = require('./assets/my-model.gltf')
Debug Mode is an advanced Cloud Editor feature that provides logging, performance information, and enhanced visualizations directly on your device.
Note: Debug mode is currently not displayed when previewing experiences on head mounted devices.
To activate Debug Mode:
If you already have a device connected in the Cloud Editor console, you can enable/disable Debug Mode at any time by pressing the “Debug Mode” toggle when you have the device tab selected.
Debug Mode Stats:
Depending on the renderer your project is using, Debug Mode will display some of the following information:
Stats Panel (tap to minimize)
<a-asset-items> (only preloaded 3D models) in <a-assets> [*]
Version Panel
Tools Panel
[*] available in Cloud Editor projects using A-Frame
Starting with iOS 12.2, Safari blocked deviceorientation and devicemotion event access from cross-origin iframes.
This prevents 8th Wall Web (if running inside the iframe) from receiving the deviceorientation and devicemotion data required for proper tracking if SLAM is enabled. (See Web Browser Requirements.) The result is that the orientation of your digital content will appear to be wrong, and the content will "jump" all over the place when you move the phone.
If you have access to the parent window, it's possible to add a script on the parent page that sends custom messages containing deviceorientation and devicemotion data to 8th Wall's AR Engine inside the iframe via JavaScript's postMessage() method. The postMessage() method safely enables cross-origin communication between Window objects, e.g. between a page and an iframe embedded within it. (See https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage)
For maximum compatibility with iOS devices, we have created two scripts:
For the OUTER website
iframe.js must be included in the HEAD of the OUTER page via this script tag:
<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>
When starting AR, register the XRIFrame by iframe ID:
window.XRIFrame.registerXRIFrame(IFRAME_ID)
When stopping AR, deregister the XRIFrame:
window.XRIFrame.deregisterXRIFrame()
For the INNER website
iframe-inner.js must be included in the HEAD of your INNER AR website with this script tag:
<script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
By allowing the inner and outer windows to communicate, deviceorientation/devicemotion data can be shared.
See sample project at https://www.8thwall.com/8thwall/inline-ar
<!-- Send deviceorientation/devicemotion to the INNER iframe -->
<script src="//cdn.8thwall.com/web/iframe/iframe.js"></script>
...
const IFRAME_ID = 'my-iframe'  // Iframe containing AR content.

const onLoad = () => {
  window.XRIFrame.registerXRIFrame(IFRAME_ID)
}

// Add event listeners and callbacks for the body DOM.
window.addEventListener('load', onLoad, false)
...
<body>
  <iframe
    id="my-iframe"
    style="border: 0; width: 100%; height: 100%"
    allow="camera;microphone;gyroscope;accelerometer;"
    src="https://www.other-domain.com/my-web-ar/">
  </iframe>
</body>
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>
...
<body>
  <!-- For A-FRAME -->
  <!-- NOTE: The iframe-inner script must load after A-FRAME, and the iframe-inner
       component must appear before xrweb. -->
  <a-scene iframe-inner xrweb>
    ...
  </a-scene>
<head>
  <!-- Receive deviceorientation/devicemotion from the OUTER window -->
  <script src="//cdn.8thwall.com/web/iframe/iframe-inner.js"></script>
</head>
...
<!-- For non-A-Frame projects, add iframeInnerPipelineModule to the custom pipeline
     module section, typically located in "onxrloaded", like so: -->
XR8.addCameraPipelineModules([
  // Custom pipeline modules
  iframeInnerPipelineModule,
])
Progressive Web Apps (PWAs) use modern web capabilities to offer users an experience that's similar to a native application. The 8th Wall Cloud Editor allows you to create a PWA version of your project so that users can add it to their home screen. Users must be connected to the internet in order to access it.
NOTE: Progressive Web Apps are only available to accounts with a paid plan.
To enable PWA support for your WebAR project:
Note: For Cloud Editor projects, you may be prompted to build & re-publish your project if it was previously published. If you decide not to republish, PWA support will be included the next time your project is built.
8th Wall's XRExtras library provides an API to automatically display an install prompt in your web app.
Please refer to the PwaInstaller
API reference at https://github.com/8thwall/web/tree/master/xrextras/src/pwainstallermodule
Dimensions:
Minimum: 512 x 512 pixels
The PwaInstaller module from XRExtras displays an install prompt asking your user to add your web app to their home screen.
To customize the look of your install prompt, you can provide custom string values through the XRExtras.PwaInstaller.configure() API.
For a completely custom install prompt, configure the installer with displayInstallPrompt and hideInstallPrompt methods.
For Self-Hosted apps, we aren’t able to automatically inject details of the PWA into the HTML, so you must use the configure API to provide the name and icon you’d like to appear in the install prompt.
Add the following <meta> tags to the <head> of your html:
<meta name="8thwall:pwa_name" content="My PWA Name">
<meta name="8thwall:pwa_icon" content="//cdn.mydomain.com/my_icon.png">
<a-scene xrextras-almost-there xrextras-loading xrextras-runtime-error xrextras-pwa-installer xrweb>
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.AlmostThere.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
XRExtras.PwaInstaller.pipelineModule(), // Added here
// Custom pipeline modules.
myCustomPipelineModule(),
])
<a-scene xrextras-gesture-detector xrextras-almost-there xrextras-loading xrextras-runtime-error xrextras-pwa-installer="name: My Cool PWA; iconSrc: '//cdn.8thwall.com/my_custom_icon'; installTitle: 'My Custom Title'; installSubtitle: 'My Custom Subtitle'; installButtonText: 'Custom Install'; iosInstallText: 'Custom iOS Install'" xrweb>
XRExtras.PwaInstaller.configure({
displayConfig: {
name: 'My Custom PWA Name',
iconSrc: '//cdn.8thwall.com/my_custom_icon',
installTitle: 'My Custom Title',
installSubtitle: 'My Custom Subtitle',
installButtonText: 'Custom Install',
iosInstallText: 'Custom iOS Install',
}
})
<a-scene xrweb="disableWorldTracking: true" xrextras-gesture-detector xrextras-almost-there xrextras-loading xrextras-runtime-error xrextras-pwa-installer="minNumVisits: 5; displayAfterDismissalMillis: 86400000;" >
XRExtras.PwaInstaller.configure({
promptConfig: {
minNumVisits: 5, // Users must visit web app 5 times before prompt
displayAfterDismissalMillis: 86400000 // One day
}
})
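The 86400000 in the example above is one day expressed in milliseconds. A small sketch (the `days` helper is our own, not part of XRExtras) makes such values readable:

```javascript
// Convert days to milliseconds for use in displayAfterDismissalMillis.
const days = (n) => n * 24 * 60 * 60 * 1000

const promptConfig = {
  minNumVisits: 5,                       // users must visit the web app 5 times
  displayAfterDismissalMillis: days(1),  // 86400000, i.e. one day
}
```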
We recommend using 3D models in GLB (glTF 2.0 binary) format for all WebAR experiences. GLB is currently the best format for WebAR with its small file size, great performance and versatile feature support (PBR, animations, etc).
Before you export, ensure that:
If your model is exported as a glTF, drag and drop the glTF folder into gltf.report and click Export to convert it to a GLB.
If your model cannot be exported to glTF/GLB from your 3D modeling software, import it into Blender and export as glTF, or use a converter.
Online converters: Creators3D, Boxshot
Native converters: Maya2glTF, 3DS Max
A full list of converters can be found at https://github.com/khronosgroup/gltf#gltf-tools.
It's a good idea to view the model in glTF Viewer before importing it to an 8th Wall project. This will help catch any issues with your model prior to adding it to an 8th Wall project.
After you import into an 8th Wall project, ensure that:
For more information about 3D model best practices, reference the GLB optimization section.
Please also view the 5 Tips for Developers to Make Any 8th Wall WebAR Project More Realistic blog post.
Optimizing assets is a critical step to creating magical WebAR content. Large assets can lead to issues such as infinite loading, black textures, and crashes.
Textures are usually the biggest contributor to large file sizes, so it’s a good idea to optimize these first.
For best results, we suggest using textures 1024x1024 or smaller. Texture dimensions should always be a power of two (512x512, 1024x1024, etc.).
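As an aid to picking sizes, here is a small sketch (the `toPowerOfTwo` helper is our own, not part of any 8th Wall API) that snaps a texture dimension down to the nearest power of two, capped at the suggested 1024 maximum:

```javascript
// Hypothetical helper: snap a texture dimension down to the nearest
// power of two, capped at the suggested 1024 maximum.
const toPowerOfTwo = (size, max = 1024) =>
  Math.min(max, 2 ** Math.floor(Math.log2(size)))

toPowerOfTwo(1500)  // → 1024 (capped at the maximum)
toPowerOfTwo(800)   // → 512
```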
This can be done using your favorite image editing and/or 3D modeling program. However, if you already have an existing GLB model, a quick and easy way to resize the textures within it is to use gltf.report.
Compression can greatly reduce file size. Draco compression is the most popular compression method and can be configured in Blender export settings or after exporting in gltf.report.
Loading compressed models to your project requires additional configuration. Reference the A-Frame sample project or the Three.js sample project for more information.
For further optimization, decimate the model to reduce polygon count.
In Blender, apply the Decimate modifier to the model and reduce the Ratio setting to a value under 1.
Select Apply Modifiers in the export settings.
If you are on a paid plan, you gain the ability to self-host WebAR experiences. If you are self-hosting on a webserver that hasn't been whitelisted (see the Connected Domains section of the documentation), you will need to authorize your device in order to view.
Authorizing a device installs a Developer Token (cookie) into its web browser, allowing it to view any app key within the current workspace.
There is no limit to the number of devices that can be authorized, but each device needs to be authorized individually. Views of your web application from an authorized device count toward your monthly usage total.
IMPORTANT: If you have followed the steps below on an iOS device, and are still having issues, please see the Troubleshooting section for steps to fix. Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.
How to authorize a device:
Log in to 8thwall.com and select a Project.
Click Device Authorization to expand the device authorization pane.
Select the 8th Wall Engine version to use during development. To use the latest stable version of 8th Wall, select release. To test against a pre-release version, select beta.
From Desktop: If you are logged into the console on your laptop/desktop, Scan the QR code from the device you wish to authorize. This installs an authorization cookie on the device.
Note: A QR code can only be scanned once. After scanning, you will receive confirmation that your device has been authorized. The console will then generate a new QR code that can be scanned to authorize another device.
From Mobile: If you are logged into 8thwall.com directly on the mobile device you wish to authorize, simply click Authorize browser. Doing so installs an authorization cookie into your mobile browser, authorizing it to view any project within the current workspace.
If you are on a paid plan, you gain the ability to host WebAR projects on your own web servers.
Serving a web app locally from your computer can be tricky, as browsers require HTTPS certificates to access the camera on your phone. As a convenience, 8th Wall has created a public GitHub repo (https://github.com/8thwall/web) where you can find a "serve" script that will run a local https webserver on your development computer. You can also download sample 8th Wall Web projects to help you get started with self-hosted configurations.
If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# ./serve/bin/serve -d <sample_project_location>
Example:
./serve/bin/serve -n -d gettingstarted/xraframe/ -p 7777
IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
NOTE: If the serve script states it's listening on 127.0.0.1:<port> (the loopback device, aka "localhost"), your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.
Example - specify network interface:
./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0
If you have issues connecting to the local webserver running on your computer, please refer to the troubleshooting section.
Serving a web app locally from your computer can be tricky, as browsers require HTTPS certificates to access the camera on your phone. As a convenience, 8th Wall has created a public GitHub repo (https://github.com/8thwall/web) where you can find a "serve" script that will run a local https webserver on your development computer. You can also download sample 8th Wall Web projects to help you get started.
If you don't already have Node.js and npm installed, get it here: https://www.npmjs.com/get-npm
Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.
# cd <directory_where_you_saved_sample_project_files>
# cd serve
# npm install
# cd ..
# serve\bin\serve.bat -d <sample_project_location>
Example:
serve\bin\serve.bat -n -d gettingstarted\xraframe -p 7777
IMPORTANT: To connect to this local webserver, make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
NOTE: If the serve script states it's listening on 127.0.0.1:<port> (the loopback device, aka "localhost"), your mobile phone won't be able to connect to that IP address directly. Please re-run the serve script with the -i flag to specify the network interface the serve script should listen on.
Example - specify network interface:
serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi
If you have issues connecting to the local webserver running on your computer, please refer to the troubleshooting section.
IMPORTANT: Make sure to copy the entire "Listening" URL into your browser, including both the "https://" at the beginning and port number at the end.
Example: https://192.168.1.50:8080
This section of the documentation is intended for advanced users who are using the 8th Wall Cloud Editor and need to create a completely customized version of XRExtras. This process involves:
If you only need to make basic customizations of the XRExtras loading screen, please refer to this section instead.
Note: By importing a copy of XRExtras into your Cloud Editor project, you will no longer receive the latest XRExtras updates and functionality available from the CDN. Make sure to always pull the latest version of the XRExtras code from GitHub as you start new projects.
Instructions:
Copy the XRExtras source files into a myxrextras folder within your Cloud Editor project.
Replace module.exports with export:
Examples:
myxrextras/aframe/aframe.js:
Changing/Adding image assets
First, drag & drop new images into assets/ to upload them to your project:
In html files with src params, refer to the image asset using a relative path:
<img src="../../assets/my-logo.png" id="loadImage" class="spin" />
In javascript files, use a relative path and require() to reference assets:
img.src = require('../../assets/my-logo.png')
Release 21.2: (2022-December-16, v21.2.2.997 / 2022-December-13, v21.2.1.997)
New Features:
Introducing Sky Effects - a major update to the 8th Wall Engine enabling sky segmentation:
Fixes and Enhancements:
XRExtras Enhancements:
Release 20.3: (2022-November-22, v20.3.3.684)
New Features:
Updated Metaversal Deployment to support mixed reality in the Meta Quest Browser.
Fixes and Enhancements:
Release 20: (2022-October-05, v20.1.20.684 / 2022-September-21, v20.1.19.684 / 2022-September-21, v20.1.17.684)
New Features:
Introducing Lightship VPS for Web - create location-based WebAR experiences by connecting AR content to real-world locations.
Added new Geospatial Browser to the 8th Wall Developer Portal.
Added enableVps parameter to XR8.XrController.configure() and xrweb.
Added XR8.Vps.makeWayspotWatcher and XR8.Vps.projectWayspots APIs for querying nearby Wayspots and project Wayspots.
Added Niantic Lightship Map module.
Fixes and Enhancements:
Release 19.1: (2022-August-26, v19.1.6.390 / 2022-August-10, v19.1.2.390)
Fixes and Enhancements:
Release 19: (2022-May-5, v19.0.16.390 / 2022-April-13, v19.0.14.390 / 2022-March-24, v19.0.8.390)
New Features:
Introducing Absolute Scale — a major update to 8th Wall SLAM to enable real-world scale in World Effects:
Fixes and Enhancements:
Release 18.2: (2022-March-09, v18.2.4.554 / 2022-January-14, v18.2.3.554 / 2022-January-13, v18.2.2.554)
Fixes and Enhancements:
Release 18.1: (2021-December-02, v18.1.3.554)
Fixes and Enhancements:
Release 18: (2021-November-08, v18.0.6.554)
New Features:
Introducing the completely rebuilt 8th Wall Engine featuring Metaversal Deployment:
Fixes and Enhancements:
XRextras:
Release 17.2: (2021-October-26, v17.2.4.476)
Fixes and Enhancements:
Release 17.1: (2021-September-21, v17.1.3.476)
New Features:
Added new APIs
Fixes and Enhancements:
XRExtras Enhancements:
Release 17: (2021-July-20, v17.0.5.476)
Fixes and Enhancements:
XRExtras Enhancements:
Release 16.1: (2021-June-02, v16.1.4.1227)
Fixes and Enhancements:
Release 16: (2021-May-21, v16.0.8.1227 / 2021-April-27, v16.0.6.1227 / 2021-April-22, v16.0.5.1227)
New Features:
Introducing the all-new 8th Wall MediaRecorder:
Fixes and Enhancements:
XRExtras Enhancements:
Release 15.3: (2021-March-2, v15.3.3.487)
New Features:
Fixes and Enhancements:
Release 15.2: (2020-December-14, v15.2.4.487)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 15.1: (2020-October-27, v15.1.4.487)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 15: (2020-October-09, v15.0.9.487 / 2020-September-22, v15.0.8.487)
New Features:
8th Wall Curved Image Targets:
Fixes and Enhancements:
XRExtras Enhancements:
New AFrame components for easy Curved Image Target development:
Release 14.2: (2020-July-30, v14.2.4.949)
New Features:
Updated MediaRecorder.configure() to provide more control over audio output and mixing:
Fixes and Enhancements:
Release 14.1: (2020-July-06, v14.1.4.949)
New Features:
Introducing 8th Wall Video Recording:
Fixes and Enhancements:
XRExtras Enhancements:
Record button prefab component for capturing video and photos:
Use XRExtras to easily customize the Video Recording user experience in your project:
Release 14: (2020-May-26)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 13.2: (2020-Feb-13)
New Features:
Fixes and Enhancements:
XRExtras Enhancements:
Release 13.1:
New Features:
Fixes and Enhancements:
Release 13:
New Features:
Release 12.1:
Fixes and Enhancements:
Release 12:
New Features:
Fixes:
XRExtras:
Release 11.2:
New Features:
Release 11.1:
Fixes and Enhancements:
Release 11:
New Features:
Release 10.1:
New Features:
Fixes:
Release 10:
Release 10 adds a revamped web developer console with streamlined developer-mode, access to allowed origins and QR codes. It adds 8th Wall Web support for XRExtras, an open-source package for error handling, loading visualizations, "almost there" flows, and more.
New Features:
XRExtras provides a convenient solution for:
Fixes:
Release 9.3:
New Features:
Release 9.2:
New Features:
Release 9.1:
New Features:
Release 9:
This section of the documentation contains details of 8th Wall Web's Javascript API.
Description
Entry point for 8th Wall's Javascript API
Functions
Function | Description |
---|---|
addCameraPipelineModule | Adds a module to the camera pipeline that will receive event callbacks for each stage in the camera pipeline. |
addCameraPipelineModules | Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array. |
clearCameraPipelineModules | Remove all camera pipeline modules from the camera loop. |
initialize | Returns a promise that is fulfilled when the AR Engine's WebAssembly is initialized. |
isInitialized | Indicates whether or not the AR Engine's WebAssembly is initialized. |
isPaused | Indicates whether or not the XR session is paused. |
pause | Pause the current XR session. While paused, the camera feed is stopped and device motion is not tracked. |
resume | Resume the current XR session. |
removeCameraPipelineModule | Removes a module from the camera pipeline. |
removeCameraPipelineModules | Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array. |
requiredPermissions | Return a list of permissions required by the application. |
run | Open the camera and start running the camera run loop. |
runPreRender | Executes all lifecycle updates that should happen before rendering. |
runPostRender | Executes all lifecycle updates that should happen after rendering. |
stop | Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. |
version | Get the 8th Wall Web engine version. |
Events
Event Emitted | Description |
---|---|
xrloaded | This event is emitted once XR8 has loaded. |
Modules
Module | Description |
---|---|
AFrame | Entry point for A-Frame integration with 8th Wall Web. |
Babylonjs | Entry point for Babylon.js integration with 8th Wall Web. |
CameraPixelArray | Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array. |
CanvasScreenshot | Provides a camera pipeline module that can generate screenshots of the current scene. |
FaceController | Provides face detection and meshing, and interfaces for configuring tracking. |
GlTextureRenderer | Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations. |
LayersController | Provides a camera pipeline module that enables semantic layer detection and interfaces for configuring layer rendering. |
MediaRecorder | Provides a camera pipeline module that allows you to record a video in MP4 format. |
PlayCanvas | Entry point for PlayCanvas integration with 8th Wall Web. |
Threejs | Provides a camera pipeline module that drives three.js camera to do virtual overlays. |
Vps | Utilities to talk to Vps services. |
XrConfig | Specifies the class of devices and cameras that pipeline modules should run on. |
XrController | XrController provides 6DoF camera tracking and interfaces for configuring tracking. |
XrDevice | Provides information about device compatibility and characteristics. |
XrPermissions | Utilities for specifying permissions required by a pipeline module. |
XR8.addCameraPipelineModule()
Description
8th Wall camera applications are built using a camera pipeline module framework. For a full description on camera pipeline modules, see CameraPipelineModule.
Applications install modules which then control the behavior of the application at runtime. A module object must have a .name string which is unique within the application, and then should provide one or more of the camera lifecycle methods which will be executed at the appropriate point in the run loop.
During the main runtime of an application, each camera frame goes through the following cycle:
onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender
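A minimal module skeleton wired to a few of these stages might look like the following sketch (the module name and logging are illustrative only; each method is optional):

```javascript
// Minimal camera pipeline module skeleton. The framework calls whichever
// lifecycle methods are present at the matching point in the run loop.
const loggingModule = () => ({
  name: 'lifecyclelogger',  // must be unique within the application
  onStart: () => console.log('XR started'),
  onUpdate: () => { /* per-frame scene updates before render */ },
  onDetach: () => console.log('module no longer receiving frame updates'),
})

// Installed with: XR8.addCameraPipelineModule(loggingModule())
```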
Camera modules should implement one or more of the following camera lifecycle methods:
Function | Description |
---|---|
onAppResourcesLoaded | Called when we have received the resources attached to an app from the server. |
onAttach | Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running. |
onBeforeRun | Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing. |
onCameraStatusChange | Called when a change occurs during the camera permissions request. |
onCanvasSizeChange | Called when the canvas changes size. |
onDetach | is called after the last time a module receives frame updates. This is either after the engine is stopped or the module is manually removed from the pipeline, whichever comes first. |
onDeviceOrientationChange | Called when the device changes landscape/portrait orientation. |
onException | Called when an error occurs in XR. Called with the error object. |
onPaused | Called when XR8.pause() is called. |
onProcessCpu | Called to read results of GPU processing and return usable data. |
onProcessGpu | Called to start GPU processing. |
onRemove | is called when a module is removed from the pipeline. |
onRender | Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop. |
onResume | Called when XR8.resume() is called. |
onStart | Called when XR starts. First callback after XR8.run() is called. |
onUpdate | Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpu.modulename and processCpu.modulename where the name is given by module.name = "modulename". |
onVideoSizeChange | Called when the video feed changes size. |
requiredPermissions | Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR. |
Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline. This is done by the module's name.
XR8.addCameraPipelineModule({
name: 'camerastartupmodule',
onCameraStatusChange: ({status}) => {
if (status == 'requesting') {
myApplication.showCameraPermissionsPrompt()
} else if (status == 'hasStream') {
myApplication.dismissCameraPermissionsPrompt()
} else if (status == 'hasVideo') {
myApplication.startMainApplication()
} else if (status == 'failed') {
myApplication.promptUserToChangeBrowserSettings()
}
},
})
// Install a module which gets the camera feed as a UInt8Array.
XR8.addCameraPipelineModule(
XR8.CameraPixelArray.pipelineModule({luminance: true, width: 240, height: 320}))
// Install a module that draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
// Create our custom application logic for scanning and displaying QR codes.
XR8.addCameraPipelineModule({
name: 'qrscan',
onProcessCpu: ({processGpuResult}) => {
// CameraPixelArray.pipelineModule() returned these in onProcessGpu.
const { pixels, rows, cols, rowBytes } = processGpuResult.camerapixelarray
const { wasFound, url, corners } = findQrCode(pixels, rows, cols, rowBytes)
return { wasFound, url, corners }
},
onUpdate: ({processCpuResult}) => {
// These were returned by this module ('qrscan') in onProcessCpu
const {wasFound, url, corners } = processCpuResult.qrscan
if (wasFound) {
showUrlAndCorners(url, corners)
}
},
})
XR8.addCameraPipelineModules([ modules ])
Description
Add multiple camera pipeline modules. This is a convenience method that calls addCameraPipelineModule in order on each element of the input array.
Parameters
Parameter | Description |
---|---|
modules | An array of camera pipeline modules. |
const onxrloaded = () => {
XR8.addCameraPipelineModules([ // Add camera pipeline modules.
// Existing pipeline modules.
XR8.GlTextureRenderer.pipelineModule(), // Draws the camera feed.
])
// Request camera permissions and run the camera.
XR8.run({canvas: document.getElementById('camerafeed')})
}
// Wait until the XR javascript has loaded before making XR calls.
window.XR8 ? onxrloaded() : window.addEventListener('xrloaded', onxrloaded)
XR8.clearCameraPipelineModules()
Description
Remove all camera pipeline modules from the camera loop.
Parameters
None
XR8.clearCameraPipelineModules()
XR8.initialize()
Parameters
None
Description
Returns a promise that is fulfilled when the AR Engine's WebAssembly is initialized.
XR8.initialize().then(() => console.log(XR8.version()))
bool XR8.isInitialized()
Parameters
None
Description
Indicates whether or not the AR Engine's WebAssembly is initialized.
if (XR8.isInitialized()) {
console.log(XR8.version())
}
bool XR8.isPaused()
Parameters
None
Description
Indicates whether or not the XR session is paused.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.pause()
Parameters
None
Description
Pause the current XR session. While paused, device motion is not tracked.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.removeCameraPipelineModule(moduleName)
Description
Removes a module from the camera pipeline.
Parameters
Parameter | Description |
---|---|
moduleName | The name string of a module. |
XR8.removeCameraPipelineModule('reality')
XR8.removeCameraPipelineModules([ moduleNames ])
Description
Remove multiple camera pipeline modules. This is a convenience method that calls removeCameraPipelineModule in order on each element of the input array.
Parameters
Parameter | Description |
---|---|
moduleNames | An array of objects with a name property, or an array of module name strings. |
XR8.removeCameraPipelineModules(['threejsrenderer', 'reality'])
XR8.requiredPermissions()
Parameters
None
Description
Return a list of permissions required by the application.
if (XR8.XrPermissions) {
const permissions = XR8.XrPermissions.permissions()
const requiredPermissions = XR8.requiredPermissions()
if (!requiredPermissions.has(permissions.DEVICE_ORIENTATION)) {
return
}
}
XR8.resume()
Parameters
None
Description
Resume the current XR session after it has been paused.
// Call XR8.pause() / XR8.resume() when the button is pressed.
document.getElementById('pause').addEventListener(
'click',
() => {
if (!XR8.isPaused()) {
XR8.pause()
} else {
XR8.resume()
}
},
true)
XR8.run({canvas, webgl2, ownRunLoop, cameraConfig, glContextConfig, allowedDevices, sessionConfiguration})
Parameters
Property | Type | Default | Description |
---|---|---|---|
canvas | HTMLCanvasElement | The HTML Canvas that the camera feed will be drawn to. | |
webgl2 [Optional] | bool | true | If true, use WebGL2 if available; otherwise fall back to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR uses its own run loop. If false, you provide your own run loop and are responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE_AND_HEADSETS | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE_AND_HEADSETS or XR8.XrConfig.device().MOBILE . |
sessionConfiguration: {disableXrTablet, xrTabletStartsMinimized, defaultEnvironment} [Optional] | Object | {} | Configure options related to varying types of sessions. |
sessionConfiguration is an object with the following [Optional] properties:
Property | Type | Default | Description |
---|---|---|---|
disableXrTablet [Optional] | bool | false | Disable the tablet visible in immersive sessions. |
xrTabletStartsMinimized [Optional] | bool | false | The tablet will start minimized. |
defaultEnvironment: {disabled, floorScale, floorTexture, floorColor, fogIntensity, skyTopColor, skyBottomColor, skyGradientStrength} [Optional] | Object | {} | Configure options related to the default environment of your immersive session. |
defaultEnvironment is an object with the following [Optional] properties:
Property | Type | Default | Description |
---|---|---|---|
disabled [Optional] | bool | false | Disable the default "void space" background. |
floorScale [Optional] | Number | 1 | Shrink or grow the floor texture. |
floorTexture [Optional] | Asset | | Specify an alternative texture asset or URL for the tiled floor. |
floorColor [Optional] | Hex Color | #1A1C2A | Set the floor color. |
fogIntensity [Optional] | Number | 1 | Increase or decrease fog density. |
skyTopColor [Optional] | Hex Color | #BDC0D6 | Set the color of the sky directly above the user. |
skyBottomColor [Optional] | Hex Color | #1A1C2A | Set the color of the sky at the horizon. |
skyGradientStrength [Optional] | Number | 1 | Control how sharply the sky gradient transitions. |
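Putting the tables above together, the following sketch builds a sessionConfiguration object. The option names come from the tables; the specific values are illustrative, not defaults.

```javascript
// Sketch: combining sessionConfiguration and defaultEnvironment options.
// Option names are from the tables above; the values are illustrative.
const sessionConfiguration = {
  xrTabletStartsMinimized: true,
  defaultEnvironment: {
    floorColor: '#1A1C2A',
    skyTopColor: '#BDC0D6',
    skyBottomColor: '#1A1C2A',
    fogIntensity: 0.5,
  },
}

// In the browser, this object would be passed to XR8.run(), e.g.:
// XR8.run({canvas: document.getElementById('camerafeed'), sessionConfiguration})
console.log(sessionConfiguration.defaultEnvironment.fogIntensity)  // 0.5
```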
Notes:
cameraConfig: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must first disable world tracking by calling XR8.XrController.configure({disableWorldTracking: true}).
Description
Open the camera and start running the camera run loop.
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed')})
// Disable world tracking (SLAM). This is required to use the front camera.
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), cameraConfig: {direction: XR8.XrConfig.camera().FRONT}})
// Open the camera and start running the camera run loop with an opaque canvas.
// In index.html: <canvas id="camerafeed"></canvas>
XR8.run({canvas: document.getElementById('camerafeed'), glContextConfig: {alpha: false, preserveDrawingBuffer: false}})
XR8.runPreRender( timestamp )
Description
Executes all lifecycle updates that should happen before rendering.
IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().
Parameters
Parameter | Description |
---|---|
timestamp | The current time, in milliseconds. |
// Implement A-Frame components tick() method
function tick() {
// Check device compatibility and run any necessary view geometry updates and draw the camera feed.
...
// Run XR lifecycle methods
XR8.runPreRender(Date.now())
}
XR8.runPostRender()
Description
Executes all lifecycle updates that should happen after rendering.
IMPORTANT: Make sure that onStart has been called before calling runPreRender()/runPostRender().
Parameters
None
// Implement A-Frame components tock() method
function tock() {
// Check whether XR is initialized
...
// Run XR lifecycle methods
XR8.runPostRender()
}
XR8.stop()
Parameters
None
Description
Stop the current XR session. While stopped, the camera feed is closed and device motion is not tracked. You must call XR8.run() again to restart after the engine is stopped.
XR8.stop()
string XR8.version()
Parameters
None
Description
Get the 8th Wall Web engine version.
console.log(XR8.version())
Description
Provides a module for monetizing your Web AR and Web VR experience. This is only available for 8th Wall Hosted projects, and requires the Payments Module.
import {AccessPass} from 'payments'
Functions
Function | Description |
---|---|
requestPurchaseIfNeeded | Opens a checkout window where the customer can securely make a payment for the provided access pass. |
AccessPass.requestPurchaseIfNeeded({ amount, name, productId, statementDescriptor, accessDurationDays, currency, language })
Description
Opens a checkout window where the customer can securely make a payment for the provided access pass.
If a valid access pass has already been purchased in the past, the returned Promise will resolve immediately with information on the previous purchase.
Any parameters provided via this API will supersede any parameters provided in the Module Configuration.
Parameters
Parameter | Type | Description |
---|---|---|
amount | number | The amount to request for payment for the specified access pass. Amounts have a minimum and maximum defined by the currency: AUD: $0.99 to $99.99, CAD: $0.99 to $99.99, GBP: £0.99 to £99.99, JPY: ¥99 to ¥999, NZD: $0.99 to $99.99, USD: $0.99 to $99.99. |
name | string | The name of the product. This is displayed to users on the checkout screen. Maximum of 30 characters. |
productId | string | A unique identifier for this access pass. Maximum of 30 characters. |
statementDescriptor | string | The descriptor that appears on the customer’s credit card statement. Maximum of 22 characters. |
accessDurationDays | number | The number of days a customer is allowed access for. Minimum of 1 and maximum of 7. |
currency | string | The currency to charge the user. Can be 'aud' , 'cad' , 'gbp' , 'jpy' , 'nzd' , or 'usd' . |
language | string | The language that appears to the end user on the secure checkout page. Can be 'en-US' (English - United States) or 'ja-JP' (Japanese). |
Returns
A Promise which will resolve if the customer has completed the purchase successfully. The result includes information about the purchase that was made:
{
productId: '1-day-access-pass',
timestamp: 1653413347810,
expirationTimestamp: 1653499747810,
}
Throws
An error is thrown if the customer does not complete the purchase successfully.
AccessPass.requestPurchaseIfNeeded({
amount: 9.99,
name: '1-Day Access Pass',
productId: '1-day-access-pass',
statementDescriptor: '1DAY ACCESS PASS',
accessDurationDays: 1,
currency: 'usd',
language: 'en-US',
})
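Since an already-purchased pass resolves immediately with the previous purchase details, it can be useful to check the resolved result against the current time. The sketch below is not part of the AccessPass API; isAccessValid is a hypothetical helper, and the sample result mirrors the shape shown under "Returns" above.

```javascript
// Sketch: deciding whether a resolved purchase result is still within its
// access window. isAccessValid is a hypothetical helper, not an API call.
const isAccessValid = (result, nowMs) => nowMs < result.expirationTimestamp

// Sample result, mirroring the shape shown in "Returns" above.
const purchase = {
  productId: '1-day-access-pass',
  timestamp: 1653413347810,
  expirationTimestamp: 1653499747810,
}

console.log(isAccessValid(purchase, purchase.timestamp + 1000))         // true: inside the window
console.log(isAccessValid(purchase, purchase.expirationTimestamp + 1))  // false: pass has expired
```

In a real flow, `nowMs` would typically be `Date.now()` and `purchase` the value the Promise resolved with.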
A-Frame (https://aframe.io) is a web framework designed for building virtual reality experiences. By adding 8th Wall Web to your A-Frame project, you can now easily build augmented reality experiences for the web.
Adding 8th Wall Web to A-Frame
Cloud Editor
<meta name="8thwall:renderer" content="aframe:1.3.0">
Self Hosted
8th Wall Web can be added to your A-Frame project in a few easy steps:
<script src="//cdn.8thwall.com/web/aframe/8frame-1.3.0.min.js"></script>
<script src="//apps.8thwall.com/xrweb?appKey=XXXXX"></script>
World Tracking and/or Image Targets
Add the xrweb component to your a-scene tag: <a-scene xrweb>
xrweb Attributes (all optional)
Component | Type | Default | Description |
---|---|---|---|
scale | String | "responsive" | Either responsive or absolute . responsive will return values so that the camera on frame 1 is at the origin defined via XR8.XrController.updateCameraProjectionMatrix(). absolute will return the camera, image targets, etc in meters. The default is responsive . When using absolute the x-position, z-position, and rotation of the starting pose will respect the parameters set in XR8.XrController.updateCameraProjectionMatrix() once scale has been estimated. The y-position will depend on the camera's physical height from the ground plane. |
disableWorldTracking | bool | false | If true, turn off SLAM tracking for efficiency. |
enableVps | bool | false | If true, look for Project Wayspots and a mesh. The mesh that is returned has no relation to Project Wayspots and will be returned even if no Project Wayspots are configured. Enabling VPS overrides settings for scale and disableWorldTracking . |
cameraDirection | string | back | Desired camera to use. Choose from: back or front . Use cameraDirection: front; with mirroredDisplay: true; for selfie mode. Note that world tracking is only supported with cameraDirection: back; . |
allowedDevices | string | "mobile-and-headsets" | Supported device classes. Choose from: 'mobile-and-headsets' , 'mobile' or 'any' . Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams. Note that world tracking is only supported on 'mobile-and-headsets' or 'mobile' . |
mirroredDisplay | bool | false | If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode. Should not be enabled if World Tracking (SLAM) is enabled. |
disableXrTablet | bool | false | Disable the tablet visible in immersive sessions. |
xrTabletStartsMinimized | bool | false | The tablet will start minimized. |
disableDefaultEnvironment | bool | false | Disable the default "void space" background. |
disableDesktopCameraControls | bool | false | Disable WASD and mouse look for camera. |
disableDesktopTouchEmulation | bool | false | Disable desktop fake touches. |
disableXrTouchEmulation | bool | false | Don’t emit touch events based on controller raycasts with the scene. |
disableCameraReparenting | bool | false | Disable reparenting of the camera to the controller object. |
defaultEnvironmentFloorScale | Number | 1 | Shrink or grow the floor texture. |
defaultEnvironmentFloorTexture | Asset | | Specify an alternative texture asset or URL for the tiled floor. |
defaultEnvironmentFloorColor | Hex Color | #1A1C2A | Set the floor color. |
defaultEnvironmentFogIntensity | Number | 1 | Increase or decrease fog density. |
defaultEnvironmentSkyTopColor | Hex Color | #BDC0D6 | Set the color of the sky directly above the user. |
defaultEnvironmentSkyBottomColor | Hex Color | #1A1C2A | Set the color of the sky at the horizon. |
defaultEnvironmentSkyGradientStrength | Number | 1 | Control how sharply the sky gradient transitions. |
Notes:
cameraDirection: World tracking (SLAM) is only supported on the back camera. If you are using the front camera, you must disable world tracking by setting disableWorldTracking: true.
xrweb and xrface cannot be used at the same time.
Face Effects
Add the xrface component to your a-scene tag: <a-scene xrface>
xrface Attributes
Component | Type | Default | Description |
---|---|---|---|
cameraDirection | string | back | Desired camera to use. Choose from: back or front . Use cameraDirection: front; with mirroredDisplay: true; for selfie mode. |
allowedDevices | string | "mobile" | Supported device classes. Choose from: 'mobile' or 'any' . Use 'any' to enable laptop or desktop-type devices with built-in or attached webcams. |
mirroredDisplay | bool | false | If true, flip left and right in the output geometry and reverse the direction of the camera feed. Use 'mirroredDisplay: true;' with 'cameraDirection: front;' for selfie mode. |
meshGeometry | array | ['face'] | Configure which portions of the face mesh will have returned triangle indices. Can be any combination of 'face' , 'eyes' and/or 'mouth' . |
Notes:
xrweb and xrface cannot be used at the same time.
Functions
Function | Description |
---|---|
xrwebComponent | Creates an A-Frame component for World Tracking and/or Image Target tracking which can be registered with AFRAME.registerComponent() . Generally won't need to be called directly. |
xrfaceComponent | Creates an A-Frame component for Face Effects tracking which can be registered with AFRAME.registerComponent() . Generally won't need to be called directly. |
<a-scene xrweb>
<a-scene xrweb="disableWorldTracking: true">
<a-scene xrweb="enableVps: true">
<a-scene xrweb="disableWorldTracking: true; cameraDirection: front">
XR8.AFrame.xrwebComponent()
Parameters
None
Description
Creates an A-Frame component which can be registered with AFRAME.registerComponent(). This generally won't need to be called directly: on 8th Wall Web script load, the component is registered automatically if A-Frame is detected (i.e. if window.AFRAME exists).
window.AFRAME.registerComponent('xrweb', XR8.AFrame.xrwebComponent())
This section describes the events emitted by the "xrweb" or "xrface" A-Frame component.
You can listen for these events in your web application to call a function that handles the event.
Events Emitted
The following events are emitted by both "xrweb" and "xrface":
Event Emitted | Description |
---|---|
camerastatuschange | This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status. |
realityerror | This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed. |
realityready | This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden. |
screenshoterror | This event is emitted in response to the screenshotrequest resulting in an error. |
screenshotready | This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided. |
Events Emitted by xrweb
Event Emitted | Description |
---|---|
xrimageloading | This event is emitted when detection image loading begins. |
xrimagescanning | This event is emitted when all detection images have been loaded and scanning has begun. |
xrimagefound | This event is emitted when an image target is first found. |
xrimageupdated | This event is emitted when an image target changes position, rotation or scale. |
xrimagelost | This event is emitted when an image target is no longer being tracked. |
xrmeshfound | This event is emitted when a mesh is first found either after start or after a recenter(). |
xrmeshupdated | This event is emitted when the first mesh found changes position or rotation. |
xrmeshlost | This event is emitted when recenter() is called. |
xrprojectwayspotscanning | This event is emitted when all Project Wayspots have been loaded for scanning. |
xrprojectwayspotfound | This event is emitted when a Project Wayspot is first found. |
xrprojectwayspotupdated | This event is emitted when a Project Wayspot changes position or rotation. |
xrprojectwayspotlost | This event is emitted when a Project Wayspot is no longer being tracked. |
xrtrackingstatus | This event is emitted when the XrController starts and any time tracking status or reason changes. |
Events Emitted by xrface
Event Emitted | Description |
---|---|
xrfaceloading | This event is emitted when loading begins for additional face AR resources. |
xrfacescanning | This event is emitted when AR resources have been loaded and scanning has begun. |
xrfacefound | This event is emitted when a face is first found. |
xrfaceupdated | This event is emitted when a face is subsequently found. |
xrfacelost | This event is emitted when a face is no longer being tracked. |
Description
This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
var handleCameraStatusChange = function handleCameraStatusChange(event) {
console.log('status change', event.detail.status);
switch (event.detail.status) {
case 'requesting':
// Do something
break;
case 'hasStream':
// Do something
break;
case 'failed':
event.target.emit('realityerror');
break;
}
};
let scene = this.el.sceneEl
scene.addEventListener('camerastatuschange', handleCameraStatusChange)
Description
This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
let scene = this.el.sceneEl
scene.addEventListener('realityerror', (event) => {
if (XR8.XrDevice.isDeviceBrowserCompatible()) {
// Browser is compatible. Print the exception for more information.
console.log(event.detail.error)
return
}
// Browser is not compatible. Check the reasons why it may not be.
for (let reason of XR8.XrDevice.incompatibleReasons()) {
// Handle each XR8.XrDevice.IncompatibilityReasons
}
})
Description
This event is emitted when 8th Wall Web has initialized.
let scene = this.el.sceneEl
scene.addEventListener('realityready', () => {
// Hide loading UI
})
Description
This event is emitted in response to the screenshotrequest resulting in an error.
let scene = this.el.sceneEl
scene.addEventListener('screenshoterror', (event) => {
console.log(event.detail)
// Handle screenshot error.
})
Description
This event is emitted in response to the screenshotrequest event being completed successfully. The JPEG compressed image of the AFrame canvas will be provided.
let scene = this.el.sceneEl
scene.addEventListener('screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
})
Description
This event is emitted by xrweb when detection image loading begins.
imageloading.detail : { imageTargets: {name, type, metadata} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
const componentMap = {}
const addComponents = ({detail}) => {
detail.imageTargets.forEach(({name, type, metadata}) => {
// ...
})
}
this.el.sceneEl.addEventListener('xrimageloading', addComponents)
Description
This event is emitted by xrweb when all detection images have been loaded and scanning has begun.
imagescanning.detail : { imageTargets: {name, type, metadata, geometry} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
geometry | Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight} , else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians} |
If type = FLAT, geometry:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL, geometry:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
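The xrimagescanning payload can be used to size placeholder content before any target is found. The sketch below caches per-target geometry keyed by name; the detail shape follows the tables above, and the event wiring is commented out because it requires a live A-Frame scene. The sample target data is illustrative.

```javascript
// Sketch: caching per-target geometry from the xrimagescanning event.
// FLAT targets carry scaled dimensions; curved targets carry radii and height.
const targetDims = {}
const onImageScanning = ({detail}) => {
  detail.imageTargets.forEach(({name, type, geometry}) => {
    targetDims[name] = (type === 'FLAT')
      ? {width: geometry.scaledWidth, height: geometry.scaledHeight}
      : {height: geometry.height, radiusTop: geometry.radiusTop, radiusBottom: geometry.radiusBottom}
  })
}
// In an A-Frame component:
// this.el.sceneEl.addEventListener('xrimagescanning', onImageScanning)

// Exercising the handler with a synthetic event (illustrative values):
onImageScanning({detail: {imageTargets: [
  {name: 'poster', type: 'FLAT', geometry: {scaledWidth: 2, scaledHeight: 3}},
]}})
console.log(targetDims.poster.width)  // 2
```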
Description
This event is emitted by xrweb when an image target is first found.
imagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted by xrweb when an image target changes position, rotation or scale.
imageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted by xrweb when an image target is no longer being tracked.
imagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
AFRAME.registerComponent('my-named-image-target', {
schema: {
name: { type: 'string' }
},
init: function () {
const object3D = this.el.object3D
const name = this.data.name
object3D.visible = false
const showImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.position.copy(detail.position)
object3D.quaternion.copy(detail.rotation)
object3D.scale.set(detail.scale, detail.scale, detail.scale)
object3D.visible = true
}
const hideImage = ({detail}) => {
if (name != detail.name) {
return
}
object3D.visible = false
}
this.el.sceneEl.addEventListener('xrimagefound', showImage)
this.el.sceneEl.addEventListener('xrimageupdated', showImage)
this.el.sceneEl.addEventListener('xrimagelost', hideImage)
}
})
Description
This event is emitted when a mesh is first found either after start or after a recenter().
xrmeshfound.detail : { id, position, rotation, bufferGeometry }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
bufferGeometry | A THREE.BufferGeometry mesh. |
Description
This event is emitted when the first mesh found changes position or rotation.
xrmeshupdated.detail : { id, position, rotation }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
Description
This event is emitted when recenter() is called.
xrmeshlost.detail : { id }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session |
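Because the mesh id is stable within a session, the three mesh events above can drive a simple pose registry. This is a sketch of that pattern, not official API usage; the event wiring is commented out because it requires a live A-Frame scene, and the synthetic calls at the bottom exercise the logic with illustrative values.

```javascript
// Sketch: tracking mesh poses by their session-stable id across
// xrmeshfound / xrmeshupdated / xrmeshlost.
const meshPoses = new Map()
const onMeshFound = ({detail}) =>
  meshPoses.set(detail.id, {position: detail.position, rotation: detail.rotation})
const onMeshUpdated = onMeshFound  // same payload shape: overwrite the stored pose
const onMeshLost = ({detail}) => meshPoses.delete(detail.id)

// In an A-Frame component:
// this.el.sceneEl.addEventListener('xrmeshfound', onMeshFound)
// this.el.sceneEl.addEventListener('xrmeshupdated', onMeshUpdated)
// this.el.sceneEl.addEventListener('xrmeshlost', onMeshLost)

// Exercising the handlers with synthetic events:
onMeshFound({detail: {id: 'm1', position: {x: 0, y: 0, z: 0}, rotation: {w: 1, x: 0, y: 0, z: 0}}})
console.log(meshPoses.has('m1'))  // true
onMeshLost({detail: {id: 'm1'}})
console.log(meshPoses.size)  // 0
```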
Description
This event is emitted when all Project Wayspots have been loaded for scanning.
xrprojectwayspotscanning.detail : { wayspots: [] }
Property | Description |
---|---|
wayspots: [] | An array of objects containing Wayspot information. |
wayspots is an array of objects with the following properties:
Property | Description |
---|---|
id | An id for this Project Wayspot that is stable within a session |
name | Project Wayspot name. |
imageUrl | URL to a representative image for this Project Wayspot. |
title | Project Wayspot title. |
lat | Latitude of this Project Wayspot. |
lng | Longitude of this Project Wayspot. |
Description
This event is emitted when a Project Wayspot is first found.
xrprojectwayspotfound.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
Description
This event is emitted when a Project Wayspot changes position or rotation.
xrprojectwayspotupdated.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
Description
This event is emitted when a Project Wayspot is no longer being tracked.
xrprojectwayspotlost.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
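The three Project Wayspot events can be handled with the same show/hide pattern used for named image targets earlier in this document. The sketch below stands in a plain visibility map for scene objects; the event wiring is commented out because it requires a live A-Frame scene, and the synthetic call at the bottom exercises the logic.

```javascript
// Sketch: toggling per-wayspot content visibility by name across
// xrprojectwayspotfound / xrprojectwayspotupdated / xrprojectwayspotlost.
const wayspotVisible = {}
const onWayspotFoundOrUpdated = ({detail}) => { wayspotVisible[detail.name] = true }
const onWayspotLost = ({detail}) => { wayspotVisible[detail.name] = false }

// In an A-Frame component:
// this.el.sceneEl.addEventListener('xrprojectwayspotfound', onWayspotFoundOrUpdated)
// this.el.sceneEl.addEventListener('xrprojectwayspotupdated', onWayspotFoundOrUpdated)
// this.el.sceneEl.addEventListener('xrprojectwayspotlost', onWayspotLost)

// Exercising the handlers with a synthetic event ('fountain' is an illustrative name):
onWayspotFoundOrUpdated({detail: {name: 'fountain', position: {x: 0, y: 0, z: 0}, rotation: {w: 1, x: 0, y: 0, z: 0}}})
console.log(wayspotVisible.fountain)  // true
```

In a real component, the found/updated handler would also copy detail.position and detail.rotation onto the attached object, as the named-image-target example does.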
Description
This event is emitted by xrweb when XrController is loaded and any time tracking status or reason changes.
xrtrackingstatus.detail : { status, reason }
Property | Description |
---|---|
status | One of 'LIMITED' or 'NORMAL' . |
reason | One of 'INITIALIZING' or 'UNDEFINED' . |
const updateScene = ({detail}) => {
const {status, reason} = detail
if (status === 'NORMAL') {
// Show scene
}
}
this.el.sceneEl.addEventListener('xrtrackingstatus', updateScene)
Description
This event is emitted by xrface when loading begins for additional face AR resources.
xrfaceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
const initMesh = ({detail}) => {
const {pointsPerDetection, uvs, indices} = detail
this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfaceloading', initMesh)
Description
This event is emitted by xrface
when all face AR resources have been loaded and scanning has begun.
xrfacescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
const initMesh = ({detail}) => {
const {pointsPerDetection, uvs, indices} = detail
this.el.object3D.add(generateMeshGeometry({pointsPerDetection, uvs, indices}))
}
this.el.sceneEl.addEventListener('xrfacescanning', initMesh)
Description
This event is emitted by xrface
when a face is first found.
xrfacefound.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} |
Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform
is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
Description
This event is emitted by xrface
when a face is subsequently found.
xrfaceupdated.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} |
Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform
is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
Description
This event is emitted by xrface
when a face is no longer being tracked.
xrfacelost.detail : {id}
Property | Description |
---|---|
id | A numerical id of the face that was lost. |
const faceRigidComponent = {
init: function () {
const object3D = this.el.object3D
object3D.visible = false
const show = ({detail}) => {
const {position, rotation, scale} = detail.transform
object3D.position.copy(position)
object3D.quaternion.copy(rotation)
object3D.scale.set(scale, scale, scale)
object3D.visible = true
}
const hide = ({detail}) => { object3D.visible = false }
this.el.sceneEl.addEventListener('xrfacefound', show)
this.el.sceneEl.addEventListener('xrfaceupdated', show)
this.el.sceneEl.addEventListener('xrfacelost', hide)
}
}
This section describes the events that are listened for by the "xrweb" A-Frame component.
You can emit these events in your web application to perform various actions:
Event Listener | Description |
---|---|
hidecamerafeed | Hides the camera feed. Tracking does not stop. |
recenter | Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter. |
screenshotrequest | Emits a request to the engine to capture a screenshot of the A-Frame canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred. |
showcamerafeed | Shows the camera feed. |
stopxr | Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked. |
scene.emit('hidecamerafeed')
Parameters
None
Description
Hides the camera feed. Tracking does not stop.
let scene = this.el.sceneEl
scene.emit('hidecamerafeed')
scene.emit('recenter', {origin, facing})
Parameters
Parameter | Description |
---|---|
origin: {x, y, z} [Optional] | The location of the new origin. |
facing: {w, x, y, z} [Optional] | A quaternion representing direction the camera should face at the origin. |
Description
Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
If origin and facing are not provided, the camera is reset to the origin previously specified by a call to recenter, or to that of the last call to updateCameraProjectionMatrix(). Note: with A-Frame, updateCameraProjectionMatrix() is initially called based on the camera's starting position in the scene.
let scene = this.el.sceneEl
scene.emit('recenter')
// OR
scene.emit('recenter', {
origin: {x: 1, y: 4, z: 0},
facing: {w: 0.9856, x:0, y:0.169, z:0}
})
scene.emit('screenshotrequest')
Parameters
None
Description
Emits a request to the engine to capture a screenshot of the A-Frame canvas. The engine will emit a screenshotready event with the JPEG-compressed image, or screenshoterror if an error has occurred.
const scene = this.el.sceneEl
const photoButton = document.getElementById('photoButton')
const image = document.getElementById('image')  // the <img> element that will display the capture (id assumed)
// Emit screenshotrequest when user taps
photoButton.addEventListener('click', () => {
image.src = ""
scene.emit('screenshotrequest')
})
scene.addEventListener('screenshotready', event => {
image.src = 'data:image/jpeg;base64,' + event.detail
})
scene.addEventListener('screenshoterror', event => {
console.log("error")
})
scene.emit('showcamerafeed')
Parameters
None
Description
Shows the camera feed.
let scene = this.el.sceneEl
scene.emit('showcamerafeed')
scene.emit('stopxr')
Parameters
None
Description
Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.
let scene = this.el.sceneEl
scene.emit('stopxr')
Babylon.js (https://www.babylonjs.com/) is a complete JavaScript framework for building 3D games and experiences with HTML5 and WebGL. Combined with 8th Wall Web, it lets you create powerful WebAR experiences.
Description
Provides an integration that interfaces with the Babylon.js environment and lifecycle to drive the Babylon.js camera for virtual overlays.
Functions
Function | Description |
---|---|
xrCameraBehavior | Get a behavior that can be attached to a Babylon camera to run World Tracking and/or Image Targets. |
faceCameraBehavior | Get a behavior that can be attached to a Babylon camera to run Face Effects. |
XR8.Babylonjs.faceCameraBehavior(config, faceConfig)
Description
Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.faceCameraBehavior())
Parameters
Parameter | Description |
---|---|
config [Optional] | Configuration parameters to pass to XR8.run() |
faceConfig [Optional] | Face configuration parameters to pass to XR8.FaceController |
config
[Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | true | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} |
Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE |
Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE . |
faceConfig
[Optional] is an object with the following properties:
Parameter | Description |
---|---|
nearClip [Optional] | The distance from the camera of the near clip plane. By default it will use the Babylon camera.minZ |
farClip [Optional] | The distance from the camera of the far clip plane. By default it will use the Babylon camera.maxZ |
meshGeometry [Optional] | List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.NOSE,] . The default is [XR8.FaceController.MeshGeometry.FACE] |
imageTargets [Optional] | List of names of the image target to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | If true, use left-handed coordinates. |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
Returns
A Babylon.js behavior that connects the Face Effects engine to the Babylon camera and starts the camera feed and tracking.
const startScene = (canvas) => {
const engine = new BABYLON.Engine(canvas, true /* antialias */)
const scene = new BABYLON.Scene(engine)
scene.useRightHandedSystem = false
const camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 0, 0), scene)
camera.rotation = new BABYLON.Vector3(0, scene.useRightHandedSystem ? Math.PI : 0, 0)
camera.minZ = 0.0001
camera.maxZ = 10000
// Add a light to the scene
const directionalLight =
new BABYLON.DirectionalLight("DirectionalLight", new BABYLON.Vector3(-5, -10, 7), scene)
directionalLight.intensity = 0.5
// Mesh logic
const faceMesh = new BABYLON.Mesh("face", scene)
const material = new BABYLON.StandardMaterial("boxMaterial", scene)
material.diffuseColor = new BABYLON.Color3(173 / 255.0, 80 / 255.0, 255 / 255.0)
faceMesh.material = material
let facePoints = []
const runConfig = {
cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
allowedDevices: XR8.XrConfig.device().ANY,
verbose: true,
}
camera.addBehavior(XR8.Babylonjs.faceCameraBehavior(runConfig)) // Connect camera to XR and show camera feed.
engine.runRenderLoop(() => {
scene.render()
})
}
XR8.Babylonjs.xrCameraBehavior(config, xrConfig)
Description
Get a behavior that can be attached to a Babylon camera like so: camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())
Parameters
Parameter | Description |
---|---|
config [Optional] | Configuration parameters to pass to XR8.run() |
xrConfig [Optional] | Configuration parameters to pass to XR8.XrController |
config
[Optional] is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and be responsible for calling runPreRender and runPostRender yourself. [Advanced users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} |
Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE |
Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY , always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE . |
xrConfig
[Optional] is an object with the following properties:
Parameter | Description |
---|---|
enableLighting [Optional] | If true, return an estimate of lighting information. |
enableWorldPoints [Optional] | If true, return the map points used for tracking. |
disableWorldTracking [Optional] | If true, turn off SLAM tracking for efficiency. |
imageTargets [Optional] | List of names of the image target to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | If true, use left-handed coordinates. |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
Returns
A Babylon.js behavior that connects the XR engine to the Babylon camera and starts the camera feed and tracking.
let surface, engine, scene, camera
const startScene = () => {
const canvas = document.getElementById('renderCanvas')
engine = new BABYLON.Engine(canvas, true, { stencil: true, preserveDrawingBuffer: true })
engine.enableOfflineSupport = false
scene = new BABYLON.Scene(engine)
camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 3, 0), scene)
initXrScene({ scene, camera }) // Add objects to the scene and set starting camera position.
// Connect the camera to the XR engine and show camera feed
camera.addBehavior(XR8.Babylonjs.xrCameraBehavior())
engine.runRenderLoop(() => {
scene.render()
})
window.addEventListener('resize', () => {
engine.resize()
})
}
Image Target Observables
onXrImageLoadingObservable: Fires when detection image loading begins.
onXrImageLoadingObservable : { imageTargets: {name, type, metadata} }
onXrImageScanningObservable: Fires when all detection images have been loaded and scanning has begun.
onXrImageScanningObservable : { imageTargets: {name, type, metadata, geometry} }
onXrImageFoundObservable: Fires when an image target is first found.
onXrImageFoundObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
onXrImageUpdatedObservable: Fires when an image target changes position, rotation or scale.
onXrImageUpdatedObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
onXrImageLostObservable: Fires when an image target is no longer being tracked.
onXrImageLostObservable : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Face Effects Observables
onFaceLoadingObservable: Fires when loading begins for additional face AR resources.
onFaceLoadingObservable : {maxDetections, pointsPerDetection, indices, uvs}
onFaceScanningObservable: Fires when all face AR resources have been loaded and scanning has begun.
onFaceScanningObservable: {maxDetections, pointsPerDetection, indices, uvs}
onFaceFoundObservable: Fires when a face is first found.
onFaceFoundObservable : {id, transform, attachmentPoints, vertices, normals}
onFaceUpdatedObservable: Fires when a face is subsequently found.
onFaceUpdatedObservable : {id, transform, attachmentPoints, vertices, normals}
onFaceLostObservable: Fires when a face is no longer being tracked.
onFaceLostObservable : {id}
scene.onXrImageUpdatedObservable.add(e => {
target.position.copyFrom(e.position)
target.rotationQuaternion.copyFrom(e.rotation)
target.scaling.set(e.scale, e.scale, e.scale)
})
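The updated observable alone leaves stale content on screen once a target is lost. A small helper (a sketch; `scene` and `target` are assumed to exist as in the snippet above) can wire the found, updated, and lost observables together:

```javascript
// Sketch: show a Babylon mesh only while a named image target is tracked.
const wireImageTargetVisibility = (scene, target, name) => {
  const place = (e) => {
    if (e.name !== name) return
    target.position.copyFrom(e.position)
    target.rotationQuaternion.copyFrom(e.rotation)
    target.scaling.set(e.scale, e.scale, e.scale)
    target.setEnabled(true)
  }
  scene.onXrImageFoundObservable.add(place)
  scene.onXrImageUpdatedObservable.add(place)
  scene.onXrImageLostObservable.add((e) => {
    if (e.name === name) target.setEnabled(false)
  })
}
```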
// this is called when face AR resources begin loading. It provides static
// information about the face, such as the UVs and indices
scene.onFaceLoadingObservable.add((event) => {
const {indices, maxDetections, pointsPerDetection, uvs} = event
// Create one small box mesh per face vertex; their positions are updated each frame below
facePoints = Array(pointsPerDetection)
for (let i = 0; i < pointsPerDetection; i++) {
const facePoint = BABYLON.MeshBuilder.CreateBox("box", {size: 0.02}, scene)
facePoint.material = material
facePoint.parent = faceMesh
facePoints[i] = facePoint
}
})
// this is called each time the face is updated which is on a per-frame basis
scene.onFaceUpdatedObservable.add((event) => {
const {vertices, normals, transform} = event
const {scale, position, rotation} = transform
vertices.forEach((v, i) => {
facePoints[i].position.x = v.x
facePoints[i].position.y = v.y
facePoints[i].position.z = v.z
})
faceMesh.scalingDeterminant = scale
faceMesh.position = position
faceMesh.rotationQuaternion = rotation
})
8th Wall camera applications are built using a camera pipeline module framework. Applications install modules which then control the behavior of the application at runtime.
Refer to XR8.addCameraPipelineModule() for details on adding camera pipeline modules to your application.
A camera pipeline module object must have a .name string which is unique within the application. It should implement one or more of the following camera lifecycle methods. These methods will be executed at the appropriate point in the run loop.
During the main runtime of an application, each camera frame goes through the following cycle:
onBeforeRun -> onCameraStatusChange (requesting -> hasStream -> hasVideo | failed) -> onStart -> onAttach -> onProcessGpu -> onProcessCpu -> onUpdate -> onRender
Camera modules should implement one or more of the following camera lifecycle methods:
Function | Description |
---|---|
onAppResourcesLoaded | Called when we have received the resources attached to an app from the server. |
onAttach | Called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline is running. |
onBeforeRun | Called immediately after XR8.run(). If any promises are returned, XR will wait on all promises before continuing. |
onCameraStatusChange | Called when a change occurs during the camera permissions request. |
onCanvasSizeChange | Called when the canvas changes size. |
onDetach | is called after the last time a module receives frame updates. This is either after the engine is stopped or the module is manually removed from the pipeline, whichever comes first. |
onDeviceOrientationChange | Called when the device changes landscape/portrait orientation. |
onException | Called when an error occurs in XR. Called with the error object. |
onPaused | Called when XR8.pause() is called. |
onProcessCpu | Called to read results of GPU processing and return usable data. |
onProcessGpu | Called to start GPU processing. |
onRemove | is called when a module is removed from the pipeline. |
onRender | Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop. |
onResume | Called when XR8.resume() is called. |
onStart | Called when XR starts. First callback after XR8.run() is called. |
onUpdate | Called to update the scene before render. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename where the name is given by module.name = "modulename". |
onVideoSizeChange | Called when the video changes size. |
requiredPermissions | Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR. |
Note: Camera modules that implement onProcessGpu or onProcessCpu can provide data to subsequent stages of the pipeline by returning an object from those methods; it is exposed to later stages under the module's name.
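Putting the lifecycle together, a minimal module might look like the sketch below (all names are illustrative); data returned from onProcessCpu reappears in onUpdate under the module's name:

```javascript
// Sketch of a minimal camera pipeline module (names are illustrative).
const myPipelineModule = {
  name: 'mymodule',
  onStart: ({canvasWidth, canvasHeight}) => {
    console.log(`mymodule started at ${canvasWidth}x${canvasHeight}`)
  },
  onProcessCpu: ({frameStartResult}) => {
    // Anything returned here is exposed to onUpdate as processCpuResult.mymodule.
    return {videoTime: frameStartResult.videoTime}
  },
  onUpdate: ({processCpuResult}) => {
    const {videoTime} = processCpuResult.mymodule || {}
    // Update scene state here using the per-frame data.
  },
}
// XR8.addCameraPipelineModule(myPipelineModule)
```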
onAppResourcesLoaded: ({ framework, imageTargets, version })
Description
Called when we have received the resources attached to an app from the server.
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
imageTargets [Optional] | An array of image targets with the fields {imagePath, metadata, name} |
version | The engine version, e.g. 14.0.8.949 |
XR8.addCameraPipelineModule({
name: 'myPipelineModule',
onAppResourcesLoaded: ({ framework, version, imageTargets }) => {
//...
},
})
onAttach: ({framework, canvas, GLctx, computeCtx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, status, stream, video, version, imageTargets, config})
Description
onAttach()
is called before the first time a module receives frame updates. It is called on modules that were added either before or after the pipeline started running, and receives the most recent values of the data described below:
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
canvas | The canvas that backs GPU processing and user display. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
isWebgl2 | True if GLctx is a WebGL2RenderingContext. |
orientation | The rotation of the UI from portrait, in degrees (-90, 0, 90, 180). |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
status | One of [ 'requesting' , 'hasStream' , 'hasVideo' , 'failed' ] |
stream | The MediaStream associated with the camera feed. |
video | The video dom element displaying the stream. |
version [Optional] | The engine version, e.g. 14.0.8.949, if app resources are loaded. |
imageTargets [Optional] | An array of image targets with the fields {imagePath, metadata, name} |
config | The configuration parameters that were passed to XR8.run(). |
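onAttach has no example in this section; following the pattern of the other lifecycle examples here, a sketch might be (myHandleResize is a hypothetical application function, as in the onCanvasSizeChange example):

```javascript
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onAttach: ({orientation, videoWidth, videoHeight, canvasWidth, canvasHeight}) => {
    // Runs once before this module receives its first frame update, even if the
    // module was added while the pipeline was already running.
    myHandleResize({orientation, videoWidth, videoHeight, canvasWidth, canvasHeight})
  },
})
```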
onCameraStatusChange: ({ status, stream, video, config })
Description
Called when a change occurs during the camera permissions request.
Called with the status, and, if applicable, a reference to the newly available data. The typical status flow will be:
requesting -> hasStream -> hasVideo.
Parameters
Parameter | Description |
---|---|
status | One of [ 'requesting' , 'hasStream' , 'hasVideo' , 'failed' ] |
stream: [Optional] | The MediaStream associated with the camera feed, if status is hasStream. |
video: [Optional] | The video DOM element displaying the stream, if status is hasVideo. |
config | The configuration parameters that were passed to XR8.run(), if status is "requesting". |
The status
parameter has the following states:
State | Description |
---|---|
requesting | In 'requesting', the browser is opening the camera and, if applicable, checking the user permissions. In this state, it is appropriate to display a prompt to the user to accept camera permissions. |
hasStream | Once the user permissions are granted and the camera is successfully opened, the status switches to 'hasStream' and any user prompts regarding permissions can be dismissed. |
hasVideo | Once camera frame data starts to be available for processing, the status switches to 'hasVideo', and the camera feed can begin displaying. |
failed | If the camera feed fails to open, the status is 'failed'. In this case it's possible that the user has denied permissions, and so helping them to re-enable permissions is advisable. |
XR8.addCameraPipelineModule({
name: 'camerastartupmodule',
onCameraStatusChange: ({status}) => {
if (status == 'requesting') {
myApplication.showCameraPermissionsPrompt()
} else if (status == 'hasStream') {
myApplication.dismissCameraPermissionsPrompt()
} else if (status == 'hasVideo') {
myApplication.startMainApplication()
} else if (status == 'failed') {
myApplication.promptUserToChangeBrowserSettings()
}
},
})
onCanvasSizeChange: ({ GLctx, computeCtx, videoWidth, videoHeight, canvasWidth, canvasHeight })
Description
Called when the canvas changes size. Called with dimensions of video and canvas.
Parameters
Parameter | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onCanvasSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
},
})
onDetach: ({framework})
Description
onDetach
is called after the last time a module receives frame updates. This is either after the engine is stopped or the module is manually removed from the pipeline, whichever comes first.
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
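For symmetry with onAttach, a sketch of an onDetach handler, following the style of the other lifecycle examples in this section:

```javascript
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onDetach: ({framework}) => {
    // Free any resources that were acquired in onAttach.
    console.log('mycamerapipelinemodule detached')
  },
})
```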
onDeviceOrientationChange: ({ GLctx, computeCtx, videoWidth, videoHeight, orientation })
Description
Called when the device changes landscape/portrait orientation.
Parameters
Parameter | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
orientation | The rotation of the UI from portrait, in degrees (-90, 0, 90, 180). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onDeviceOrientationChange: ({ GLctx, videoWidth, videoHeight, orientation }) => {
// handleResize({ GLctx, videoWidth, videoHeight, orientation })
},
})
onException: (error)
Description
Called when an error occurs in XR. Called with the error object.
Parameters
Parameter | Description |
---|---|
error | The error object that was thrown |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onException : (error) => {
console.error('XR threw an exception', error)
},
})
onPaused: ()
Description
Called when XR8.pause() is called.
Parameters
None
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onPaused: () => {
console.log('pausing application')
},
})
onProcessGpu: ({ framework, frameStartResult })
Description
Called to start GPU processing.
Parameters
Parameter | Description |
---|---|
framework | { dispatchEvent(eventName, detail) } : Emits a named event with the supplied detail. |
frameStartResult | { cameraTexture, computeTexture, GLctx, computeCtx, textureWidth, textureHeight, orientation, videoTime, repeatFrame } |
The frameStartResult
parameter has the following properties:
Property | Description |
---|---|
cameraTexture | The drawing canvas's WebGLTexture containing camera feed data. |
computeTexture | The compute canvas's WebGLTexture containing camera feed data. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
textureWidth | The width (in pixels) of the camera feed texture. |
textureHeight | The height (in pixels) of the camera feed texture. |
orientation | The rotation of the UI from portrait, in degrees (-90, 0, 90, 180). |
videoTime | The timestamp of this video frame. |
repeatFrame | True if the camera feed has not updated since the last call. |
Returns
Any data that you wish to provide to onProcessCpu and onUpdate should be returned. It will be provided to those methods as processGpuResult.modulename
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessGpu: ({frameStartResult}) => {
const {cameraTexture, GLctx, textureWidth, textureHeight} = frameStartResult
if(!cameraTexture.name){
console.error("[index] Camera texture does not have a name")
}
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Do relevant GPU processing here
...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// These fields will be provided to onProcessCpu and onUpdate
return {gpuDataA, gpuDataB}
},
})
onProcessCpu: ({ framework, frameStartResult, processGpuResult })
Description
Called to read results of GPU processing and return usable data. Called with { frameStartResult, processGpuResult }
. Data returned by modules in onProcessGpu will be present as processGpuResult.modulename
where the name is given by module.name = "modulename".
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
Returns
Any data that you wish to provide to onUpdate should be returned. It will be provided to that method as processCpuResult.modulename
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ frameStartResult, processGpuResult }) => {
const GLctx = frameStartResult.GLctx
const { cameraTexture } = frameStartResult
const { camerapixelarray, mycamerapipelinemodule } = processGpuResult
// Do something interesting with mycamerapipelinemodule.gpuDataA and mycamerapipelinemodule.gpuDataB
...
// These fields will be provided to onUpdate
return {cpuDataA, cpuDataB}
},
})
onRemove: ({framework})
Description
onRemove
is called when a module is removed from the pipeline.
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
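A sketch of an onRemove handler; removing the module by name (here via XR8.removeCameraPipelineModule, assuming the standard engine API) triggers it:

```javascript
XR8.addCameraPipelineModule({
  name: 'mycamerapipelinemodule',
  onRemove: ({framework}) => {
    console.log('mycamerapipelinemodule removed from the pipeline')
  },
})
// Later, removing the module by name triggers onRemove:
XR8.removeCameraPipelineModule('mycamerapipelinemodule')
```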
onRender: ()
Description
Called after onUpdate. This is the time for the rendering engine to issue any WebGL drawing commands. If an application is providing its own run loop and is relying on XR8.runPreRender() and XR8.runPostRender(), this method is not called and all rendering must be coordinated by the external run loop.
Parameters
None
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onRender: () => {
// This is already done by XR8.Threejs.pipelineModule() but is provided here as an illustration.
const {scene, camera, renderer} = XR8.Threejs.xrScene()
renderer.render(scene, camera)
},
})
onResume: ()
Description
Called when XR8.resume() is called.
Parameters
None
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onResume: () => {
console.log('resuming application')
},
})
onStart: ({ canvas, GLctx, computeCtx, isWebgl2, orientation, videoWidth, videoHeight, canvasWidth, canvasHeight, config })
Description
Called when XR starts.
Parameters
Parameter | Description |
---|---|
canvas | The canvas that backs GPU processing and user display. |
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
isWebgl2 | True if GLctx is a WebGL2RenderingContext. |
orientation | The rotation of the UI from portrait, in degrees (-90, 0, 90, 180). |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
config | The configuration parameters that were passed to XR8.run(). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onStart: ({canvasWidth, canvasHeight}) => {
// Get the 3js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
// reason we can access it here now is because 'mycamerapipelinemodule' was installed after
// XR8.Threejs.pipelineModule().
const {scene, camera} = XR8.Threejs.xrScene()
// Add some objects to the scene and set the starting camera position.
myInitXrScene({scene, camera})
// Sync the xr controller's 6DoF position and camera parameters with our scene.
XR8.XrController.updateCameraProjectionMatrix({
origin: camera.position,
facing: camera.quaternion,
})
},
})
onUpdate: ({ framework, frameStartResult, processGpuResult, processCpuResult })
Description
Called to update the scene before render. Called with { framework, frameStartResult, processGpuResult, processCpuResult }. Data returned by modules in onProcessGpu and onProcessCpu will be present as processGpuResult.modulename and processCpuResult.modulename, where the name is given by module.name = "modulename".
Parameters
Parameter | Description |
---|---|
framework | The framework bindings for this module for dispatching events. |
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
processCpuResult | Data returned by all installed modules during onProcessCpu. |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onUpdate: ({ frameStartResult, processGpuResult, processCpuResult }) => {
if (!processCpuResult.reality) {
return
}
const {rotation, position, intrinsics} = processCpuResult.reality
const {cpuDataA, cpuDataB} = processCpuResult.mycamerapipelinemodule
// ...
},
})
onVideoSizeChange: ({ GLctx, computeCtx, videoWidth, videoHeight, canvasWidth, canvasHeight, orientation })
Description
Called when the canvas changes size. Called with dimensions of video and canvas as well as device orientation.
Parameters
Parameters | Description |
---|---|
GLctx | The drawing canvas's WebGLRenderingContext or WebGL2RenderingContext. |
computeCtx | The compute canvas's WebGLRenderingContext or WebGL2RenderingContext. |
videoWidth | The width of the camera feed, in pixels. |
videoHeight | The height of the camera feed, in pixels. |
canvasWidth | The width of the GLctx canvas, in pixels. |
canvasHeight | The height of the GLctx canvas, in pixels. |
orientation | The rotation of the UI from portrait, in degrees (-90, 0, 90, 180). |
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onVideoSizeChange: ({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight }) => {
myHandleResize({ GLctx, videoWidth, videoHeight, canvasWidth, canvasHeight })
},
})
requiredPermissions: ([permissions])
Description
requiredPermissions is used to define the list of permissions required by a pipeline module.
Parameters
Parameter | Description |
---|---|
permissions | An array of XR8.XrPermissions.permissions() required by the pipeline module. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})
Description
Provides a camera pipeline module that gives access to camera data as a grayscale or color uint8 array.
Functions
Function | Description |
---|---|
pipelineModule | A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing. |
XR8.CameraPixelArray.pipelineModule({ luminance, maxDimension, width, height })
Description
A pipeline module that provides the camera texture as an array of RGBA or grayscale pixel values that can be used for CPU image processing.
Parameters
Parameter | Default | Description |
---|---|---|
luminance [Optional] | false | If true, output grayscale instead of RGBA |
maxDimension [Optional] | | The size in pixels of the longest dimension of the output image. The shorter dimension will be scaled relative to the size of the camera input so that the image is resized without cropping or distortion. |
width [Optional] | The width of the camera feed texture. | Width of the output image. Ignored if maxDimension is specified. |
height [Optional] | The height of the camera feed texture. | Height of the output image. Ignored if maxDimension is specified. |
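To make the maxDimension scaling concrete, the helper below mirrors the documented behavior (longest side clamped to maxDimension, shorter side scaled by the same factor). The function name is hypothetical and not part of the 8th Wall API:

```javascript
// Hypothetical helper mirroring the documented maxDimension behavior:
// the longest output dimension equals maxDimension, and the shorter
// dimension is scaled by the same factor (no cropping or distortion).
const outputSize = (srcWidth, srcHeight, maxDimension) => {
  const scale = maxDimension / Math.max(srcWidth, srcHeight)
  return {
    width: Math.round(srcWidth * scale),
    height: Math.round(srcHeight * scale),
  }
}
```

For example, a 1280x720 camera feed with maxDimension: 640 would produce a 640x360 output image.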
Returns
Return value is an object made available to onProcessCpu and onUpdate as:
processGpuResult.camerapixelarray: {rows, cols, rowBytes, pixels, srcTex}
Property | Description |
---|---|
rows | Height in pixels of the output image. |
cols | Width in pixels of the output image. |
rowBytes | Number of bytes per row of the output image. |
pixels | A Uint8Array of pixel data. |
srcTex | A texture containing the source image for the returned pixels. |
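When iterating over pixels, index rows by rowBytes rather than cols, since rows may be padded. A minimal sketch (the averageLuminance helper is illustrative, not part of the API):

```javascript
// Illustrative helper: average the grayscale values in a
// camerapixelarray result. Rows are indexed by rowBytes because
// rowBytes may exceed cols when rows are padded.
const averageLuminance = ({rows, cols, rowBytes, pixels}) => {
  let sum = 0
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      sum += pixels[r * rowBytes + c]
    }
  }
  return sum / (rows * cols)
}
```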
XR8.addCameraPipelineModule(XR8.CameraPixelArray.pipelineModule({ luminance: true }))
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ processGpuResult }) => {
const { camerapixelarray } = processGpuResult
if (!camerapixelarray || !camerapixelarray.pixels) {
return
}
const { rows, cols, rowBytes, pixels } = camerapixelarray
// ...
},
})
Description
Provides a camera pipeline module that can generate screenshots of the current scene.
Functions
Function | Description |
---|---|
configure | Configures the expected result of canvas screenshots. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed. |
setForegroundCanvas | Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas. |
takeScreenshot | Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided. |
XR8.CanvasScreenshot.configure({ maxDimension, jpgCompression })
Description
Configures the expected result of canvas screenshots.
Parameters
Parameter | Default | Description |
---|---|---|
maxDimension [Optional] | 1280 | The value of the largest expected dimension. |
jpgCompression [Optional] | 75 | A 1-100 value representing the JPEG compression quality. 100 is little to no loss, and 1 is a very low quality image. |
XR8.CanvasScreenshot.configure({ maxDimension: 640, jpgCompression: 50 })
XR8.CanvasScreenshot.pipelineModule()
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started and when the canvas size has changed.
Parameters
None
Returns
A CanvasScreenshot pipeline module that can be added via XR8.addCameraPipelineModule().
XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())
XR8.CanvasScreenshot.setForegroundCanvas(canvas)
Description
Sets a foreground canvas to be displayed on top of the camera canvas. This must be the same dimensions as the camera canvas.
Only required if you use separate canvases for camera feed vs virtual objects.
Parameters
Parameter | Description |
---|---|
canvas | The canvas to use as a foreground in the screenshot |
const myOtherCanvas = document.getElementById('canvas2')
XR8.CanvasScreenshot.setForegroundCanvas(myOtherCanvas)
XR8.CanvasScreenshot.takeScreenshot({ onProcessFrame })
Description
Returns a Promise that when resolved, provides a buffer containing the JPEG compressed image. When rejected, an error message is provided.
Parameters
Parameter | Description |
---|---|
onProcessFrame [Optional] | Callback where you can implement additional drawing to the screenshot 2d canvas. |
XR8.addCameraPipelineModule(XR8.CanvasScreenshot.pipelineModule())
XR8.CanvasScreenshot.takeScreenshot().then(
data => {
// myImage is an <img> HTML element
const image = document.getElementById('myImage')
image.src = 'data:image/jpeg;base64,' + data
},
error => {
console.log(error)
// Handle screenshot error.
})
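The resolved data is a base64-encoded JPEG string. If you need raw bytes instead (for example, to upload the screenshot), a small decoder using the standard atob() function might look like this; the helper name is illustrative:

```javascript
// Illustrative helper: decode the base64 JPEG string resolved by
// takeScreenshot() into a byte array, e.g. for uploading via fetch().
const base64ToBytes = (base64) => {
  const binary = atob(base64)  // decode base64 to a binary string
  const bytes = new Uint8Array(binary.length)
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i)
  }
  return bytes
}
```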
Description
Provides a module that generates a Coaching Overlay for your Absolute Scale Web AR experience.
For information on the Lightship VPS Coaching Overlay, please see here.
Functions
Function | Description |
---|---|
configure | Configures Coaching Overlay settings. |
pipelineModule | Creates a camera pipeline module that, when installed, adds coaching overlay functionality to your project. |
CoachingOverlay.configure({ animationColor, promptColor, promptText, disablePrompt })
Description
Configures behavior and look of the coaching overlay.
Parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
animationColor | String | "white" | Color of the coaching overlay animation. This parameter accepts valid CSS color arguments. |
promptColor | String | "white" | Color of all the coaching overlay text. This parameter accepts valid CSS color arguments. |
promptText | String | "Move device forward and back" | Sets the text string for the animation explainer text that informs users of the motion they need to make to generate scale. |
disablePrompt | Boolean | false | Set to true to hide default coaching overlay in order to use coaching overlay events for a custom overlay. |
CoachingOverlay.configure({
animationColor: '#E86FFF',
promptText: 'To generate scale push your phone forward and then pull back',
})
CoachingOverlay.pipelineModule()
Description
Creates a pipeline module that, when installed, adds coaching overlay functionality to your absolute scale project.
Parameters
None
Returns
A pipeline module that adds a coaching overlay to your project.
// Configured here
CoachingOverlay.configure({
animationColor: '#E86FFF',
promptText: 'To generate scale push your phone forward and then pull back',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
LandingPage.pipelineModule(),
// Added here
CoachingOverlay.pipelineModule(),
// ...
])
Description
FaceController provides face detection and meshing, and interfaces for configuring tracking.
Functions
Function | Description |
---|---|
configure | Configures what processing is performed by FaceController. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position. |
AttachmentPoints | Points on the face you can anchor content to. |
MeshGeometry | Options for defining which portions of the face have mesh triangles returned. |
XR8.FaceController.configure({ nearClip, farClip, meshGeometry, coordinates })
Description
Configures what processing is performed by FaceController.
Parameters
Parameter | Description |
---|---|
nearClip [Optional] | The distance from the camera of the near clip plane. |
farClip [Optional] | The distance from the camera of the far clip plane. |
meshGeometry [Optional] | List that contains which parts of the head geometry are visible. Options are: [XR8.FaceController.MeshGeometry.FACE, XR8.FaceController.MeshGeometry.EYES, XR8.FaceController.MeshGeometry.MOUTH] . The default is [XR8.FaceController.MeshGeometry.FACE] |
coordinates [Optional] | {origin, scale, axes, mirroredDisplay} |
coordinates [Optional] is an object with the following properties:
Parameter | Description |
---|---|
origin [Optional] | {position: {x, y, z}, rotation: {w, x, y, z}} of the camera. |
scale [Optional] | Scale of the scene. |
axes [Optional] | 'LEFT_HANDED' or 'RIGHT_HANDED' . Default is 'RIGHT_HANDED' |
mirroredDisplay [Optional] | If true, flip left and right in the output. |
IMPORTANT: FaceController and XrController cannot be used at the same time.
XR8.FaceController.configure({
meshGeometry: [XR8.FaceController.MeshGeometry.FACE],
coordinates: {
mirroredDisplay: true,
axes: 'RIGHT_HANDED',
},
})
XR8.FaceController.pipelineModule()
Parameters
None
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
Returns
Return value is an object made available to onUpdate as:
processCpuResult.facecontroller: { rotation, position, intrinsics, cameraFeedTexture }
Property | Description |
---|---|
rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
position: {x, y, z} | The position of the camera in the scene. |
intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
cameraFeedTexture | The WebGLTexture containing camera feed data. |
Dispatched Events
faceloading: Fires when loading begins for additional face AR resources.
faceloading.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
facescanning: Fires when all face AR resources have been loaded and scanning has begun.
facescanning.detail : {maxDetections, pointsPerDetection, indices, uvs}
Property | Description |
---|---|
maxDetections | The maximum number of faces that can be simultaneously processed. |
pointsPerDetection | Number of vertices that will be extracted per face. |
indices: [{a, b, c}] | Indexes into the vertices array that form the triangles of the requested mesh, as specified with meshGeometry on configure. |
uvs: [{u, v}] | uv positions into a texture map corresponding to the returned vertex points. |
facefound: Fires when a face is first found.
facefound.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
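Per the table above, multiplying scale by the scaled dimensions yields the approximate head size in scene units. A trivial sketch (the helper name is illustrative, not part of the API):

```javascript
// Illustrative helper: approximate scene-space head dimensions from a
// facefound/faceupdated transform, per the documented meaning of
// scale, scaledWidth, scaledHeight and scaledDepth.
const headDimensions = ({scale, scaledWidth, scaledHeight, scaledDepth}) => ({
  width: scale * scaledWidth,
  height: scale * scaledHeight,
  depth: scale * scaledDepth,
})
```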
faceupdated: Fires when a face is subsequently found.
faceupdated.detail : {id, transform, vertices, normals, attachmentPoints}
Property | Description |
---|---|
id | A numerical id of the located face. |
transform: {position, rotation, scale, scaledWidth, scaledHeight, scaledDepth} | Transform information of the located face. |
vertices: [{x, y, z}] | Position of face points, relative to transform. |
normals: [{x, y, z}] | Normal direction of vertices, relative to transform. |
attachmentPoints: { name, position: {x,y,z} } | See XR8.FaceController.AttachmentPoints for list of available attachment points. position is relative to the transform. |
transform is an object with the following properties:
Property | Description |
---|---|
position {x, y, z} | The 3d position of the located face. |
rotation {w, x, y, z} | The 3d local orientation of the located face. |
scale | A scale factor that should be applied to objects attached to this face. |
scaledWidth | Approximate width of the head in the scene when multiplied by scale. |
scaledHeight | Approximate height of the head in the scene when multiplied by scale. |
scaledDepth | Approximate depth of the head in the scene when multiplied by scale. |
facelost: Fires when a face is no longer being tracked.
facelost.detail : { id }
Property | Description |
---|---|
id | A numerical id of the face that was lost. |
XR8.addCameraPipelineModule(XR8.FaceController.pipelineModule())
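Because each event detail carries a numerical id, multi-face experiences typically keep per-face state keyed by that id. A minimal sketch of such bookkeeping (the tracker itself is illustrative, not part of the API); in a real app you would wire its methods to the facefound, faceupdated and facelost events:

```javascript
// Illustrative bookkeeping for tracked faces, keyed by the numerical
// id carried in facefound/faceupdated/facelost event details.
const makeFaceTracker = () => {
  const faces = new Map()
  return {
    onFaceFound: ({id, transform}) => { faces.set(id, transform) },
    onFaceUpdated: ({id, transform}) => { faces.set(id, transform) },
    onFaceLost: ({id}) => { faces.delete(id) },
    count: () => faces.size,
  }
}
```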
Enumeration
Description
Points of the face you can anchor content to.
Properties
Property | Value | Description |
---|---|---|
FOREHEAD | forehead | Forehead |
RIGHT_EYEBROW_INNER | rightEyebrowInner | Inner side of right eyebrow |
RIGHT_EYEBROW_MIDDLE | rightEyebrowMiddle | Middle of right eyebrow |
RIGHT_EYEBROW_OUTER | rightEyebrowOuter | Outer side of right eyebrow |
LEFT_EYEBROW_INNER | leftEyebrowInner | Inner side of left eyebrow |
LEFT_EYEBROW_MIDDLE | leftEyebrowMiddle | Middle of left eyebrow |
LEFT_EYEBROW_OUTER | leftEyebrowOuter | Outer side of left eyebrow |
LEFT_EAR | leftEar | Left ear |
RIGHT_EAR | rightEar | Right ear |
LEFT_CHEEK | leftCheek | Left cheek |
RIGHT_CHEEK | rightCheek | Right cheek |
NOSE_BRIDGE | noseBridge | Bridge of the nose |
NOSE_TIP | noseTip | Tip of the nose |
LEFT_EYE | leftEye | Left eye |
RIGHT_EYE | rightEye | Right eye |
LEFT_EYE_OUTER_CORNER | leftEyeOuterCorner | Outer corner of left eye |
RIGHT_EYE_OUTER_CORNER | rightEyeOuterCorner | Outer corner of right eye |
UPPER_LIP | upperLip | Upper lip |
LOWER_LIP | lowerLip | Lower lip |
MOUTH | mouth | Mouth |
MOUTH_RIGHT_CORNER | mouthRightCorner | Right corner of mouth |
MOUTH_LEFT_CORNER | mouthLeftCorner | Left corner of mouth |
CHIN | chin | Chin |
Enumeration
Description
Options for defining which portions of the face have mesh triangles returned.
Properties
Property | Value | Description |
---|---|---|
FACE | face | Return geometry for the face. |
MOUTH | mouth | Return geometry for the mouth. |
EYES | eyes | Return geometry for the eyes. |
Description
Provides a camera pipeline module that draws the camera feed to a canvas as well as extra utilities for GL drawing operations.
Functions
Function | Description |
---|---|
configure | Configures the pipeline module that draws the camera feed to the canvas. |
create | Creates an object for rendering from a texture to a canvas or another texture. |
fillTextureViewport | Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create() |
getGLctxParameters | Gets the current set of WebGL bindings so that they can be restored later. |
pipelineModule | Creates a pipeline module that draws the camera feed to the canvas. |
setGLctxParameters | Restores the WebGL bindings that were saved with getGLctxParameters. |
setTextureProvider | Sets a provider that passes the texture to draw. |
XR8.GlTextureRenderer.configure({ vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })
Description
Configures the pipeline module that draws the camera feed to the canvas.
Parameters
Parameter | Description |
---|---|
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGLTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
mirroredDisplay [Optional] | If true, flip the rendering left-right. |
const purpleShader =
// Purple.
` precision mediump float;
varying vec2 texUv;
uniform sampler2D sampler;
void main() {
vec4 c = texture2D(sampler, texUv);
float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
vec3 p = vec3(.463, .067, .712);
vec3 p1 = vec3(1.0, 1.0, 1.0) - p;
vec3 rgb = y < .25 ? (y * 4.0) * p : ((y - .25) * 1.333) * p1 + p;
gl_FragColor = vec4(rgb, c.a);
}`
XR8.GlTextureRenderer.configure({fragmentSource: purpleShader})
XR8.GlTextureRenderer.create({ GLctx, vertexSource, fragmentSource, toTexture, flipY, mirroredDisplay })
Description
Creates an object for rendering from a texture to a canvas or another texture.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext (or WebGL2RenderingContext) to use for rendering. If no toTexture is specified, content will be drawn to this context's canvas. |
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGLTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
mirroredDisplay [Optional] | If true, flip the rendering left-right. |
Returns
Returns an object: {render, destroy, shader}
Property | Description |
---|---|
render({ renderTexture, viewport }) | A function that renders the renderTexture to the specified viewport. Depending on whether toTexture was supplied, the viewport is either on the canvas that created GLctx or relative to the provided render texture. |
destroy | Clean up resources associated with this GlTextureRenderer. |
shader | Gets a handle to the shader being used to draw the texture. |
The render function has the following parameters:
Parameter | Description |
---|---|
renderTexture | A WebGLTexture (source) to draw. |
viewport | The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport(). |
The viewport is specified by { width, height, offsetX, offsetY }:
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX [Optional] | The minimum x-coordinate (in pixels) to draw to. |
offsetY [Optional] | The minimum y-coordinate (in pixels) to draw to. |
XR8.GlTextureRenderer.fillTextureViewport(srcWidth, srcHeight, destWidth, destHeight)
Description
Convenience method for getting a Viewport struct that fills a texture or canvas from a source without distortion. This is passed to the render method of the object created by GlTextureRenderer.create()
Parameters
Parameter | Description |
---|---|
srcWidth | The width of the texture you are rendering. |
srcHeight | The height of the texture you are rendering. |
destWidth | The width of the render target. |
destHeight | The height of the render target. |
Returns
An object: { width, height, offsetX, offsetY }
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX | The minimum x-coordinate (in pixels) to draw to. |
offsetY | The minimum y-coordinate (in pixels) to draw to. |
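One plausible implementation of this math, assuming the source is scaled uniformly to cover the destination and centered (so any overflow is cropped equally on both sides); this is a sketch of the documented behavior, not the library's actual internals:

```javascript
// Sketch of an aspect-preserving "fill" viewport: scale the source
// uniformly until it covers the destination, then center it. Negative
// offsets indicate the source overflows the destination on that axis.
const fillTextureViewport = (srcWidth, srcHeight, destWidth, destHeight) => {
  const scale = Math.max(destWidth / srcWidth, destHeight / srcHeight)
  const width = Math.round(srcWidth * scale)
  const height = Math.round(srcHeight * scale)
  return {
    width,
    height,
    offsetX: Math.round((destWidth - width) / 2),
    offsetY: Math.round((destHeight - height) / 2),
  }
}
```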
XR8.GlTextureRenderer.getGLctxParameters(GLctx, textureUnit)
Description
Gets the current set of WebGL bindings so that they can be restored later.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext or WebGL2RenderingContext to get bindings from. |
textureUnit | The texture units to preserve state for, e.g. [GLctx.TEXTURE0] |
Returns
A struct to pass to setGLctxParameters.
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
// ...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state
XR8.GlTextureRenderer.pipelineModule({ vertexSource, fragmentSource, toTexture, flipY })
Description
Creates a pipeline module that draws the camera feed to the canvas.
Parameters
Parameter | Description |
---|---|
vertexSource [Optional] | The vertex shader source to use for rendering. |
fragmentSource [Optional] | The fragment shader source to use for rendering. |
toTexture [Optional] | A WebGLTexture to draw to. If no texture is provided, drawing will be to the canvas. |
flipY [Optional] | If true, flip the rendering upside-down. |
Returns
Return value is an object {viewport, shader} made available to onProcessCpu and onUpdate as processGpuResult.gltexturerenderer with the following properties:
Property | Description |
---|---|
viewport | The region of the canvas or output texture to draw to; this can be constructed manually, or using GlTextureRenderer.fillTextureViewport(). |
shader | A handle to the shader being used to draw the texture. |
processGpuResult.gltexturerenderer.viewport: { width, height, offsetX, offsetY }
Property | Description |
---|---|
width | The width (in pixels) to draw. |
height | The height (in pixels) to draw. |
offsetX | The minimum x-coordinate (in pixels) to draw to. |
offsetY | The minimum y-coordinate (in pixels) to draw to. |
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
XR8.addCameraPipelineModule({
name: 'mycamerapipelinemodule',
onProcessCpu: ({ processGpuResult }) => {
const {viewport, shader} = processGpuResult.gltexturerenderer
if (!viewport) {
return
}
const { width, height, offsetX, offsetY } = viewport
// ...
},
})
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
Description
Restores the WebGL bindings that were saved with getGLctxParameters.
Parameters
Parameter | Description |
---|---|
GLctx | The WebGLRenderingContext or WebGL2RenderingContext to restore bindings on. |
restoreParams | The output of getGLctxParameters. |
const restoreParams = XR8.GlTextureRenderer.getGLctxParameters(GLctx, [GLctx.TEXTURE0])
// Alter context parameters as needed
// ...
XR8.GlTextureRenderer.setGLctxParameters(GLctx, restoreParams)
// Context parameters are restored to their previous state
XR8.GlTextureRenderer.setTextureProvider(({ frameStartResult, processGpuResult, processCpuResult }) => {} )
Description
Sets a provider that passes the texture to draw. This should be a function that takes the same inputs as cameraPipelineModule.onUpdate.
Parameters
setTextureProvider() takes a function with the following parameters:
Parameter | Description |
---|---|
frameStartResult | The data that was provided at the beginning of a frame. |
processGpuResult | Data returned by all installed modules during onProcessGpu. |
processCpuResult | Data returned by all installed modules during onProcessCpu. |
XR8.GlTextureRenderer.setTextureProvider(
({processGpuResult}) => {
return processGpuResult.camerapixelarray ? processGpuResult.camerapixelarray.srcTex : null
})
Description
LayersController provides semantic layer detection and interfaces for configuring layer rendering.
Functions
Function | Description |
---|---|
configure | Configures what processing is performed by LayersController. |
getLayerNames | Returns the layers that are configured by the LayersController. |
pipelineModule | Creates a camera pipeline module that, when installed, provides semantic layer detection. |
recenter | Repositions the camera to the origin / facing direction. |
XR8.LayersController.configure({ nearClip, farClip, coordinates, layers })
Description
Configures the processing performed by LayersController.
Parameters
Parameter | Description |
---|---|
nearClip [Optional] | The closest distance to the camera at which scene objects are visible. |
farClip [Optional] | The farthest distance to the camera at which scene objects are visible. |
coordinates [Optional] | Camera configuration: {origin, scale, axes, mirroredDisplay} |
layers [Optional] | Semantic layers to detect. To remove a layer, pass null as the layer value. To reset a layer option to its default value, pass null for that option value. The only supported layer at this time is sky . |
layers [Optional] is a nullable object with the following properties:
Parameter | Description |
---|---|
layerName [Optional] | Semantic layer to detect. The only supported layer at this time is sky . |
invertLayerMask [Optional] | If true , content you place in your scene will occlude non-sky areas. If false , content you place in your scene will occlude sky areas. Default is false . |
IMPORTANT: LayersController cannot be used at the same time as FaceController and XrController
XR8.LayersController.configure({layers: {sky: {invertLayerMask: false}}})
XR8.LayersController.getLayerNames()
Parameters
None
Description
Returns the layers that are configured by the LayersController.
XR8.LayersController.pipelineModule()
Parameters
None
Description
Creates a camera pipeline module that, when installed, provides semantic layer detection.
Returns
Return value is an object made available to onUpdate as:
processCpuResult.layerscontroller: { rotation, position, intrinsics, cameraFeedTexture, layers }
Property | Description |
---|---|
rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
position: {x, y, z} | The position of the camera in the scene. |
intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
cameraFeedTexture | The WebGLTexture containing camera feed data. |
layers | A map from layer name to LayerOutput. |
LayerOutput is an object with the following properties:
Parameter | Description |
---|---|
texture | The WebGLTexture containing layer data. The r, g, b channels indicate our confidence of whether the layer is present at this pixel. 0.0 indicates the layer is not present and 1.0 indicates it is present. Note that this value will be flipped if invertLayerMask has been set to true. |
textureWidth | Width of the returned texture in pixels. |
textureHeight | Height of the returned texture in pixels. |
percentage | Percentage of pixels that are classified as associated with the layer. Value in the range of [0, 1] |
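The percentage field is convenient for gating effects on how much of the frame the layer covers. For instance, a sky effect might only activate above some coverage threshold; the helper and threshold below are illustrative, not part of the API:

```javascript
// Illustrative gate (not part of the API): enable a sky effect only
// when a meaningful fraction of pixels is classified as the layer.
const shouldShowSkyEffect = (layerOutput, threshold = 0.1) =>
  layerOutput.percentage >= threshold
```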
Dispatched Events
layerloading: Fires when loading begins for additional layer segmentation resources.
layerloading.detail : {}
layerscanning: Fires when all layer segmentation resources have been loaded and scanning has begun. One event is dispatched per layer being scanned.
layerscanning.detail : {name}
Property | Description |
---|---|
name | Name of the layer being scanned. |
layerfound: Fires the first time a layer has been found.
layerfound.detail : {name, percentage}
Property | Description |
---|---|
name | Name of the layer that has been found. |
percentage | Percentage of pixels that are associated with the layer. |
XR8.addCameraPipelineModule(XR8.LayersController.pipelineModule())
XR8.LayersController.recenter()
Parameters
None
Description
Repositions the camera to the origin / facing direction.
Description
Provides a module that generates a custom Landing Page for your Web AR experience.
Functions
Function | Description |
---|---|
configure | Configures LandingPage settings. |
pipelineModule | Creates a camera pipeline module that, when installed, adds landing page functionality to your project. |
LandingPage.configure({ logoSrc, logoAlt, promptPrefix, url, promptSuffix, textColor, font, textShadow, backgroundSrc, backgroundBlur, backgroundColor, mediaSrc, mediaAlt, mediaAutoplay, mediaAnimation, mediaControls, sceneEnvMap, sceneOrbitIdle, sceneOrbitInteraction, sceneLightingIntensity, vrPromptPrefix })
Description
Configures behavior and look of the LandingPage module.
Parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
logoSrc | String | | Image source for brand logo image. |
logoAlt | String | "Logo" | Alt text for brand logo image. |
promptPrefix | String | "Scan or visit" | Sets the text string for call to action before the URL for the experience is displayed. |
url | String | 8th.io link if 8th Wall hosted, or current page | Sets the displayed URL and QR code. |
promptSuffix | String | "to continue" | Sets the text string for call to action after the URL for the experience is displayed. |
textColor | Hex Color | "#ffffff" | Color of all the text on the Landing Page. |
font | String | "'Nunito', sans-serif" | Font of all text on the Landing Page. This parameter accepts valid CSS font-family arguments. |
textShadow | Bool | false | Sets text-shadow property for all text on the Landing Page. |
backgroundSrc | String | | Image source for background image. |
backgroundBlur | Number | 0.0 | Applies a blur effect to the backgroundSrc if one is specified. Typical values are between 0.0 and 1.0. |
backgroundColor | String | linear-gradient(#464766,#2D2E43) | Background color of the Landing Page. This parameter accepts valid CSS background-color arguments. Background color is not displayed if a background-src or sceneEnvMap is set. |
mediaSrc | String | App’s cover image, if present | Media source (3D model, image, or video) for landing page hero content. Accepted media sources include an a-asset-item id or a URL. |
mediaAlt | String | "Preview" | Alt text for landing page image content. |
mediaAutoplay | Bool | true | If the mediaSrc is a video, specifies if the video should be played on load with sound muted. |
mediaAnimation | String | [First animation clip of model if present] | If the mediaSrc is a 3D model, specify whether to play a specific animation clip associated with the model, or "none". |
mediaControls | String | "minimal" | If mediaSrc is a video, specify the media controls displayed to the user. Choose from "none", "minimal", or "browser" (browser defaults). |
sceneEnvMap | String | "field" | Image source pointing to an equirectangular image, or one of the following preset environments: "field", "hill", "city", "pastel", or "space". |
sceneOrbitIdle | String | "spin" | If the mediaSrc is a 3D model, specify whether the model should "spin", or "none". |
sceneOrbitInteraction | String | "drag" | If the mediaSrc is a 3D model, specify whether the user can interact with the orbit controls; choose "drag" or "none". |
sceneLightingIntensity | Number | 1.0 | If the mediaSrc is a 3D model, specify the strength of the light illuminating the model. |
vrPromptPrefix | String | "or visit" | Sets the text string for call to action before the URL for the experience is displayed on VR headsets. |
LandingPage.configure({
mediaSrc: 'https://www.mydomain.com/bat.glb',
sceneEnvMap: 'hill',
})
LandingPage.pipelineModule()
Description
Creates a pipeline module that, when installed, adds landing page functionality to your project.
Parameters
None
Returns
A pipeline module that adds landing page functionality to your project.
// Configured here
LandingPage.configure({
mediaSrc: 'https://domain.com/bat.glb',
sceneEnvMap: 'hill',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
// Added here
LandingPage.pipelineModule(),
...
])
Description
Provides a camera pipeline module that allows you to record a video in MP4 format.
Functions
Function | Description |
---|---|
configure | Configure video recording settings. |
pipelineModule | Creates a pipeline module that records video in MP4 format. |
recordVideo | Start recording. |
requestMicrophone | Enables recording of audio (if not enabled automatically), requesting permissions if needed. |
stopRecording | Stop recording. |
RequestMicOptions | Enum for whether or not to automatically request microphone permissions. |
XR8.MediaRecorder.configure({ coverImageUrl, enableEndCard, endCardCallToAction, footerImageUrl, foregroundCanvas, maxDurationMs, maxDimension, shortLink, configureAudioOutput, audioContext, requestMic })
Description
Configures various MediaRecorder parameters.
Parameters
Parameter | Default | Description |
---|---|---|
coverImageUrl [Optional] | cover image configured in project, null otherwise | Image source for cover image. |
enableEndCard [Optional] | false | If true, enable end card. |
endCardCallToAction [Optional] | 'Try it at: ' | Sets the text string for call to action. |
footerImageUrl [Optional] | null | Image source for footer image. |
foregroundCanvas [Optional] | null | The canvas to use as a foreground in the recorded video. |
maxDurationMs [Optional] | 15000 | Maximum duration of video, in milliseconds. |
maxDimension [Optional] | 1280 | Max dimension of the captured recording, in pixels. |
shortLink [Optional] | 8th.io shortlink from project dashboard | Sets the text string for shortlink. |
configureAudioOutput [Optional] | null | User-provided function that receives the microphoneInput and audioProcessor audio nodes for complete control of the recording's audio. The nodes attached to the audio processor node will be part of the recording's audio. Your function must return the end node of your audio graph. |
audioContext [Optional] | null | User-provided AudioContext instance. Engines like THREE.js and BABYLON.js have their own internal audio instance. In order for recordings to contain sounds defined in those engines, you'll want to provide their AudioContext instance. |
requestMic [Optional] | 'auto' | Determines when the audio permissions are requested. The options are provided in XR8.MediaRecorder.RequestMicOptions. |
The function passed to configureAudioOutput
takes an object with the following parameters:
Parameter | Description |
---|---|
microphoneInput | A GainNode that contains the user’s mic input. If the user’s permissions are not accepted, then this node won’t output the mic input but will still be present. |
audioProcessor | A ScriptProcessorNode that passes audio data to the recorder. If you want an audio node to be part of the recording’s audio output, then you must connect it to the audioProcessor. |
XR8.MediaRecorder.configure({
maxDurationMs: 15000,
enableEndCard: true,
endCardCallToAction: 'Try it at:',
shortLink: '8th.io/my-link',
})
const userConfiguredAudioOutput = ({microphoneInput, audioProcessor}) => {
const myCustomAudioGraph = ...
myCustomAudioSource.connect(myCustomAudioGraph)
microphoneInput.connect(myCustomAudioGraph)
// connect audio graph end node to hardware
myCustomAudioGraph.connect(microphoneInput.context.destination)
// audio graph will be automatically connected to processor
return myCustomAudioGraph
}
const threejsAudioContext = THREE.AudioContext.getContext()
XR8.MediaRecorder.configure({
configureAudioOutput: userConfiguredAudioOutput,
audioContext: threejsAudioContext,
requestMic: XR8.MediaRecorder.RequestMicOptions.AUTO,
})
XR8.MediaRecorder.pipelineModule()
Description
Provides a camera pipeline module that allows you to record a video in MP4 format.
Parameters
None
Returns
A MediaRecorder pipeline module that allows you to record a video.
XR8.addCameraPipelineModule(XR8.MediaRecorder.pipelineModule())
XR8.MediaRecorder.recordVideo({ onError, onProcessFrame, onStart, onStop, onVideoReady })
Description
Start recording.
This function takes an object that implements one or more of the following media recorder lifecycle callback methods:
Parameters
Parameter | Description |
---|---|
onError | Callback when there is an error. |
onProcessFrame | Callback for adding an overlay to the video. |
onStart | Callback when recording has started. |
onStop | Callback when recording has stopped. |
onPreviewReady | Callback when a previewable, but not sharing-optimized, video is ready (Android/Desktop only) |
onFinalizeProgress | Callback when the media recorder is making progress in the final export (Android/Desktop only) |
onVideoReady | Callback when recording has completed and video is ready. |
Note: When the browser has native MediaRecorder support for webm and not mp4 (currently Android/Desktop), the webm is usable as a preview video, but is converted to mp4 to generate the final video. onPreviewReady is called when the conversion starts, to allow the user to see the video immediately, and onVideoReady is called when the mp4 file is ready. During conversion, onFinalizeProgress is called periodically to allow a progress bar to be displayed.
XR8.MediaRecorder.recordVideo({
onVideoReady: (result) => window.dispatchEvent(new CustomEvent('recordercomplete', {detail: result})),
onStop: () => showLoading(),
onError: () => clearState(),
onProcessFrame: ({elapsedTimeMs, maxRecordingMs, ctx}) => {
// overlay some red text over the video
ctx.fillStyle = 'red'
ctx.font = '50px "Nunito"'
ctx.fillText(`${elapsedTimeMs}/${maxRecordingMs}`, 50, 50)
const timeLeft = ( 1 - elapsedTimeMs / maxRecordingMs)
// update the progress bar to show how much time is left
progressBar.style.strokeDashoffset = `${100 * timeLeft }`
},
onFinalizeProgress: ({progress, total}) => {
console.log('Export is ' + Math.round((progress / total) * 100) + '% complete')
},
})
XR8.MediaRecorder.requestMicrophone()
Description
Enables recording of audio (if not enabled automatically), requesting permissions if needed.
Returns a promise that lets the client know when the stream is ready. If you begin recording before the audio stream is ready, then you may miss the user's microphone output at the beginning of the recording.
Parameters
None
XR8.MediaRecorder.requestMicrophone()
.then(() => {
console.log('Microphone requested!')
})
.catch((err) => {
console.log('Hit an error: ', err)
})
XR8.MediaRecorder.stopRecording()
Description
Stop recording.
Parameters
None
XR8.MediaRecorder.stopRecording()
Enumeration
Description
Determines when the audio permissions are requested.
Properties
Property | Value | Description |
---|---|---|
AUTO | auto | Automatically request microphone permissions in onAttach(). |
MANUAL | manual | Microphone permissions are NOT requested in onAttach(). Any other audio added to the app is still recorded if added to the AudioContext and connected to the audioProcessor provided to the user's configureAudioOutput function passed to XR8.MediaRecorder.configure(). You can request microphone permissions manually by calling XR8.MediaRecorder.requestMicrophone(). |
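As a sketch of the MANUAL flow (the 'record-button' element id and the recordVideo callbacks shown are illustrative): configure the recorder so permissions are not requested in onAttach(), then request the microphone from a user gesture before starting the recording:

```javascript
// Sketch: defer microphone permission until the user taps a record button.
const startRecordingWithMic = () =>
  XR8.MediaRecorder.requestMicrophone()
    .then(() => XR8.MediaRecorder.recordVideo({
      onVideoReady: (result) => console.log('video ready', result),
    }))
    .catch((err) => console.log('Mic unavailable: ', err))

if (typeof XR8 !== 'undefined' && typeof document !== 'undefined') {
  // Don't request the microphone in onAttach(); wait for a user gesture.
  XR8.MediaRecorder.configure({requestMic: XR8.MediaRecorder.RequestMicOptions.MANUAL})
  document.getElementById('record-button').addEventListener('click', startRecordingWithMic)
}
```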
PlayCanvas (https://www.playcanvas.com/) is an open-source 3D engine paired with a proprietary cloud-hosted creation platform that allows simultaneous editing from multiple computers via a browser-based interface.
Description
Provides an integration that interfaces with the PlayCanvas environment and lifecycle to drive the PlayCanvas camera to do virtual overlays.
Functions
Function | Description |
---|---|
runXr | Opens the camera and starts running World Tracking and/or Image Tracking in a PlayCanvas scene. |
runFaceEffects | Opens the camera and starts running Face Effects in a PlayCanvas scene. |
stopXr | Remove the modules added in runXr and stop the camera. |
stopFaceEffects | Remove the modules added in runFaceEffects and stop the camera. |
To get started go to https://playcanvas.com/the8thwall and fork one of our sample projects:
AR World Tracking Starter Kit: An application to get you started quickly creating WebAR world tracking applications in PlayCanvas.
AR Image Tracking Starter Kit: An application to get you started quickly creating WebAR image tracking applications in PlayCanvas.
AR Face Effects Starter Kit: An application to get you started quickly creating Face Effects WebAR applications in PlayCanvas.
World Tracking and Face Effects: An example that illustrates how to switch between World Tracking and Face Effects in a single project.
Add your App Key
Go to Settings -> External Scripts
The following two scripts should be added:
https://cdn.8thwall.com/web/xrextras/xrextras.js
https://apps.8thwall.com/xrweb?appKey=XXXXXX
(Note: replace the X's with your own unique App Key obtained from the 8th Wall Console.)
Enable "Transparent Canvas"
Go to Settings -> Rendering
Make sure that "Transparent Canvas" is checked
Disable "Prefer WebGL 2.0"
Go to Settings -> Rendering
Make sure that "Prefer WebGL 2.0" is unchecked
Add XRController
NOTE: Only for SLAM and/or Image Target projects. FaceController and XrController cannot be used simultaneously.
The 8th Wall sample PlayCanvas projects are populated with an XRController game object. If you are starting with a blank project, download xrcontroller.js
from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.
Options:
Option | Description |
---|---|
disableWorldTracking | If true, turn off SLAM tracking for efficiency. |
shadowmaterial | Material which you want to use as a transparent shadow receiver (e.g. for ground shadows). Typically this material will be used on a "ground" plane entity positioned at (0,0,0) |
Add FaceController
NOTE: Only for Face Effects projects. FaceController and XrController cannot be used simultaneously.
The 8th Wall sample PlayCanvas projects are populated with a FaceController game object. If you are starting with a blank project, download facecontroller.js
from https://www.github.com/8thwall/web/tree/master/gettingstarted/playcanvas/scripts/ and attach it to an Entity in your scene.
Option | Description |
---|---|
headAnchor | The entity to anchor to the root of the head in world space. |
XR8.PlayCanvas.runXr( {pcCamera, pcApp}, [extraModules], config )
Description
Opens the camera and starts running World Tracking and/or Image Targets in a PlayCanvas scene.
Parameters
Parameter | Description |
---|---|
pcCamera | The PlayCanvas scene camera to drive with AR. |
pcApp | The PlayCanvas app, typically this.app. |
extraModules [Optional] | An optional array of extra pipeline modules to install. |
config | Configuration parameters to pass to XR8.run() |
config
is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
canvas | HTMLCanvasElement | | The HTML Canvas that the camera feed will be drawn to. Typically this is 'application-canvas'. |
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and are responsible for calling runPreRender and runPostRender yourself. [Advanced Users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
var xrcontroller = pc.createScript('xrcontroller')
// Optionally, world tracking can be disabled to increase efficiency when tracking image targets.
xrcontroller.attributes.add('disableWorldTracking', {type: 'boolean'})
xrcontroller.prototype.initialize = function() {
const disableWorldTracking = this.disableWorldTracking
// After XR has fully loaded, open the camera feed and start displaying AR.
const runOnLoad = ({pcCamera, pcApp}, extraModules) => () => {
XR8.xrController().configure({disableWorldTracking})
// Pass in your canvas name. Typically this is 'application-canvas'.
const config = {canvas: document.getElementById('application-canvas') }
XR8.PlayCanvas.runXr({pcCamera, pcApp}, extraModules, config)
}
// Find the camera in the playcanvas scene, and tie it to the motion of the user's phone in the
// world.
const pcCamera = XRExtras.PlayCanvas.findOneCamera(this.entity)
// While XR is still loading, show some helpful things.
// Almost There: Detects whether the user's environment can support web ar, and if it doesn't,
// shows hints for how to view the experience.
// Loading: shows prompts for camera permission and hides the scene until it's ready for display.
// Runtime Error: If something unexpected goes wrong, display an error screen.
XRExtras.Loading.showLoading({onxrloaded: runOnLoad({pcCamera, pcApp: this.app}, [
// Optional modules that developers may wish to customize or theme.
XRExtras.AlmostThere.pipelineModule(), // Detects unsupported browsers and gives hints.
XRExtras.Loading.pipelineModule(), // Manages the loading screen on startup.
XRExtras.RuntimeError.pipelineModule(), // Shows an error image on runtime error.
])})
}
XR8.PlayCanvas.runFaceEffects( {pcCamera, pcApp}, [extraModules], config )
Description
Opens the camera and starts running Face Effects in a PlayCanvas scene.
Parameters
Parameter | Description |
---|---|
pcCamera | The PlayCanvas scene camera to drive with AR. |
pcApp | The PlayCanvas app, typically this.app. |
extraModules [Optional] | An optional array of extra pipeline modules to install. |
config | Configuration parameters to pass to XR8.run() |
config
is an object with the following properties:
Property | Type | Default | Description |
---|---|---|---|
canvas | HTMLCanvasElement | | The HTML Canvas that the camera feed will be drawn to. Typically this is 'application-canvas'. |
webgl2 [Optional] | bool | false | If true, use WebGL2 if available, otherwise fallback to WebGL1. If false, always use WebGL1. |
ownRunLoop [Optional] | bool | false | If true, XR should use its own run loop. If false, you will provide your own run loop and are responsible for calling runPreRender and runPostRender yourself. [Advanced Users only] |
cameraConfig: {direction} [Optional] | object | {direction: XR8.XrConfig.camera().BACK} | Desired camera to use. Supported values for direction are XR8.XrConfig.camera().BACK or XR8.XrConfig.camera().FRONT. |
glContextConfig [Optional] | WebGLContextAttributes | null | The attributes to configure the WebGL canvas context. |
allowedDevices [Optional] | XR8.XrConfig.device() | XR8.XrConfig.device().MOBILE | Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera. Note that world tracking can only be used with XR8.XrConfig.device().MOBILE. |
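For comparison with the runXr example above, a minimal Face Effects launcher might look like the following sketch. The canvas id and the XRExtras modules mirror the runXr example and may need adjusting for your project:

```javascript
// Sketch: start Face Effects from a PlayCanvas script once XR has loaded.
const runOnLoad = ({pcCamera, pcApp}, extraModules) => () => {
  // Pass in your canvas name. Typically this is 'application-canvas'.
  const config = {canvas: document.getElementById('application-canvas')}
  XR8.PlayCanvas.runFaceEffects({pcCamera, pcApp}, extraModules, config)
}

if (typeof pc !== 'undefined') {
  var facecontroller = pc.createScript('facecontroller')
  facecontroller.prototype.initialize = function() {
    // Find the camera in the PlayCanvas scene.
    const pcCamera = XRExtras.PlayCanvas.findOneCamera(this.entity)
    // While XR is still loading, show loading prompts, then start Face Effects.
    XRExtras.Loading.showLoading({onxrloaded: runOnLoad({pcCamera, pcApp: this.app}, [
      XRExtras.Loading.pipelineModule(),       // Manages the loading screen on startup.
      XRExtras.RuntimeError.pipelineModule(),  // Shows an error image on runtime error.
    ])})
  }
}
```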
XR8.PlayCanvas.stopXr()
Description
Remove the modules added in runXr() and stop the camera.
Parameters
None.
XR8.PlayCanvas.stopFaceEffects()
Description
Remove the modules added in runFaceEffects() and stop the camera.
Parameters
None.
This section describes the events fired by 8th Wall in a PlayCanvas environment.
You can listen for these events in your web application.
Events Emitted
Event Emitted | Description |
---|---|
xr:camerastatuschange | This event is emitted when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status. |
xr:realityerror | This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed. |
xr:realityready | This event is emitted when 8th Wall Web has initialized and at least one frame has been successfully processed. This is the recommended time at which any loading elements should be hidden. |
xr:screenshoterror | This event is emitted in response to the xr:screenshotrequest event resulting in an error. |
xr:screenshotready | This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided. |
XrController Events Emitted
Event Emitted | Description |
---|---|
xr:imageloading | This event is emitted when detection image loading begins. |
xr:imagescanning | This event is emitted when all detection images have been loaded and scanning has begun. |
xr:imagefound | This event is emitted when an image target is first found. |
xr:imageupdated | This event is emitted when an image target changes position, rotation or scale. |
xr:imagelost | This event is emitted when an image target is no longer being tracked. |
xr:meshfound | This event is emitted when a mesh is first found either after start or after a recenter(). |
xr:meshupdated | This event is emitted when the first mesh found changes position or rotation. |
xr:meshlost | This event is emitted when recenter() is called. |
xr:projectwayspotscanning | This event is emitted when all Project Wayspots have been loaded for scanning. |
xr:projectwayspotfound | This event is emitted when a Project Wayspot is first found. |
xr:projectwayspotupdated | This event is emitted when a Project Wayspot changes position or rotation. |
xr:projectwayspotlost | This event is emitted when a Project Wayspot is no longer being tracked. |
FaceController Events Emitted
Event Emitted | Description |
---|---|
xr:faceloading | Fires when loading begins for additional face AR resources. |
xr:facescanning | Fires when all face AR resources have been loaded and scanning has begun. |
xr:facefound | Fires when a face is first found. |
xr:faceupdated | Fires when a face is subsequently found. |
xr:facelost | Fires when a face is no longer being tracked. |
Description
This event is fired when the status of the camera changes. See onCameraStatusChange from XR8.addCameraPipelineModule for more information on the possible status.
const handleCameraStatusChange = function handleCameraStatusChange(detail) {
console.log('status change', detail.status);
switch (detail.status) {
case 'requesting':
// Do something
break;
case 'hasStream':
// Do something
break;
case 'failed':
this.app.fire('xr:realityerror');
break;
}
}
this.app.on('xr:camerastatuschange', handleCameraStatusChange, this)
Description
This event is emitted when a mesh is first found either after start or after a recenter().
xr:meshfound.detail : { id, position, rotation, mesh }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session. |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
mesh: pc.Mesh() | A PlayCanvas mesh with index, position, and color attributes. |
Description
This event is emitted when the first mesh found changes position or rotation.
xr:meshupdated.detail : { id, position, rotation }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session. |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
Description
This event is emitted when recenter() is called.
xr:meshlost.detail : { id }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session. |
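The three mesh events above can be wired together in a script. A sketch (the script name 'meshtracker' is arbitrary, and it assumes this.entity is the entity to align with the mesh):

```javascript
// Sketch: keep an entity aligned with the first mesh found.
const applyPose = (entity, {position, rotation}) => {
  entity.setPosition(position.x, position.y, position.z)
  entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
}

if (typeof pc !== 'undefined') {
  var meshtracker = pc.createScript('meshtracker')
  meshtracker.prototype.initialize = function() {
    // Show and position the entity when the mesh is found.
    this.app.on('xr:meshfound', (detail) => {
      applyPose(this.entity, detail)
      this.entity.enabled = true
    }, this)
    // Follow the mesh as it updates.
    this.app.on('xr:meshupdated', (detail) => applyPose(this.entity, detail), this)
    // Hide the entity when recenter() is called.
    this.app.on('xr:meshlost', () => { this.entity.enabled = false }, this)
  }
}
```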
Description
This event is emitted when all Project Wayspots have been loaded for scanning.
xr:projectwayspotscanning.detail : { wayspots: [] }
Property | Description |
---|---|
wayspots: [] | An array of objects containing Wayspot information. |
wayspots
is an array of objects with the following properties:
Property | Description |
---|---|
id | An id for this Project Wayspot that is stable within a session |
name | Project Wayspot name. |
imageUrl | URL to a representative image for this Project Wayspot. |
title | Project Wayspot title. |
lat | Latitude of this Project Wayspot. |
lng | Longitude of this Project Wayspot. |
Description
This event is emitted when a Project Wayspot is first found.
xr:projectwayspotfound.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
Description
This event is emitted when a Project Wayspot changes position or rotation.
xr:projectwayspotupdated.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
Description
This event is emitted when a Project Wayspot is no longer being tracked.
xr:projectwayspotlost.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
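The Project Wayspot events can be handled the same way as the mesh events. A sketch (the script name 'wayspottracker' is arbitrary, and 'my-wayspot' is a placeholder for a Wayspot name from your project):

```javascript
// Sketch: show an entity only while a specific Project Wayspot is tracked.
const matchesWayspot = (detail, wayspotName) => detail.name === wayspotName

if (typeof pc !== 'undefined') {
  var wayspottracker = pc.createScript('wayspottracker')
  wayspottracker.prototype.initialize = function() {
    this.app.on('xr:projectwayspotfound', (detail) => {
      if (!matchesWayspot(detail, 'my-wayspot')) { return }
      // Align the entity with the located Wayspot, then show it.
      this.entity.setPosition(detail.position.x, detail.position.y, detail.position.z)
      this.entity.setRotation(detail.rotation.x, detail.rotation.y, detail.rotation.z, detail.rotation.w)
      this.entity.enabled = true
    }, this)
    this.app.on('xr:projectwayspotlost', (detail) => {
      if (matchesWayspot(detail, 'my-wayspot')) { this.entity.enabled = false }
    }, this)
  }
}
```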
Description
This event is emitted when an error has occurred when initializing 8th Wall Web. This is the recommended time at which any error messages should be displayed. The XrDevice API can help with determining what type of error messaging should be displayed.
this.app.on('xr:realityerror', ({error, isDeviceBrowserSupported, compatibility}) => {
if (isDeviceBrowserSupported) {
// Browser is compatible. Print the exception for more information.
console.log(error)
return
}
// Browser is not compatible. Check the reasons why it may not be in `compatibility`
console.log(compatibility)
}, this)
Description
This event is fired when 8th Wall Web has initialized and at least one frame has been successfully processed.
this.app.on('xr:realityready', () => {
// Hide loading UI
}, this)
Description
This event is emitted in response to the xr:screenshotrequest resulting in an error.
this.app.on('xr:screenshoterror', (detail) => {
console.log(detail)
// Handle screenshot error.
}, this)
Description
This event is emitted in response to the xr:screenshotrequest event being completed successfully. The JPEG compressed image of the PlayCanvas canvas will be provided.
this.app.on('xr:screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
}, this)
Image target events can be listened to as this.app.on(event, handler, this).
xr:imageloading: Fires when detection image loading begins.
xr:imageloading : { imageTargets: {name, type, metadata} }
xr:imagescanning: Fires when all detection images have been loaded and scanning has begun.
xr:imagescanning : { imageTargets: {name, type, metadata, geometry} }
xr:imagefound: Fires when an image target is first found.
xr:imagefound : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
xr:imageupdated: Fires when an image target changes position, rotation or scale.
xr:imageupdated : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
xr:imagelost: Fires when an image target is no longer being tracked.
xr:imagelost : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
const showImage = (detail) => {
if (name != detail.name) { return }
const {rotation, position, scale} = detail
entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
entity.setPosition(position.x, position.y, position.z)
entity.setLocalScale(scale, scale, scale)
entity.enabled = true
}
const hideImage = (detail) => {
if (name != detail.name) { return }
entity.enabled = false
}
this.app.on('xr:imagefound', showImage, {})
this.app.on('xr:imageupdated', showImage, {})
this.app.on('xr:imagelost', hideImage, {})
Face Effects events can be listened to as this.app.on(event, handler, this).
xr:faceloading: Fires when loading begins for additional face AR resources.
xr:faceloading : {maxDetections, pointsPerDetection, indices, uvs}
xr:facescanning: Fires when all face AR resources have been loaded and scanning has begun.
xr:facescanning: {maxDetections, pointsPerDetection, indices, uvs}
xr:facefound: Fires when a face is first found.
xr:facefound : {id, transform, attachmentPoints, vertices, normals}
xr:faceupdated: Fires when a face is subsequently found.
xr:faceupdated : {id, transform, attachmentPoints, vertices, normals}
xr:facelost: Fires when a face is no longer being tracked.
xr:facelost : {id}
let mesh = null
// Fires when loading begins for additional face AR resources.
this.app.on('xr:faceloading', ({maxDetections, pointsPerDetection, indices, uvs}) => {
const node = new pc.GraphNode();
const material = this.material.resource;
mesh = pc.createMesh(
this.app.graphicsDevice,
new Array(pointsPerDetection * 3).fill(0.0), // setting filler vertex positions
{
uvs: uvs.map((uv) => [uv.u, uv.v]).flat(),
indices: indices.map((i) => [i.a, i.b, i.c]).flat()
}
);
const meshInstance = new pc.MeshInstance(node, mesh, material);
const model = new pc.Model();
model.graph = node;
model.meshInstances.push(meshInstance);
this.entity.model.model = model;
}, {})
// Fires when a face is subsequently found.
this.app.on('xr:faceupdated', ({id, transform, attachmentPoints, vertices, normals}) => {
const {position, rotation, scale, scaledDepth, scaledHeight, scaledWidth} = transform
this.entity.setPosition(position.x, position.y, position.z);
this.entity.setLocalScale(scale, scale, scale)
this.entity.setRotation(rotation.x, rotation.y, rotation.z, rotation.w)
// Set mesh vertices in local space
mesh.setPositions(vertices.map((vertexPos) => [vertexPos.x, vertexPos.y, vertexPos.z]).flat())
// Set vertex normals
mesh.setNormals(normals.map((normal) => [normal.x, normal.y, normal.z]).flat())
mesh.update()
}, {})
This section describes the events that are listened for by 8th Wall Web in a PlayCanvas environment.
You can fire these events in your web application to perform various actions:
Event Listener | Description |
---|---|
xr:hidecamerafeed | Hides the camera feed. Tracking does not stop. |
xr:recenter | Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter. |
xr:screenshotrequest | Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit an xr:screenshotready event with the JPEG compressed image, or xr:screenshoterror if an error has occurred. |
xr:showcamerafeed | Shows the camera feed. |
xr:stopxr | Stop the current XR session. While stopped, the camera feed is stopped and device motion is not tracked. |
this.app.fire('xr:hidecamerafeed')
Parameters
None
Description
Hides the camera feed. Tracking does not stop.
this.app.fire('xr:hidecamerafeed')
this.app.fire('xr:recenter')
Description
Recenters the camera feed to its origin. If a new origin is provided as an argument, the camera's origin will be reset to that, then it will recenter.
Parameters
Parameter | Description |
---|---|
origin: {x, y, z} [Optional] | The location of the new origin. |
facing: {w, x, y, z} [Optional] | A quaternion representing direction the camera should face at the origin. |
/*jshint esversion: 6, asi: true, laxbreak: true*/
// taprecenter.js: Defines a playcanvas script that re-centers the AR scene when the screen is
// tapped.
var taprecenter = pc.createScript('taprecenter')
// Fire a 'recenter' event to move the camera back to its starting location in the scene.
taprecenter.prototype.initialize = function() {
this.app.touch.on(pc.EVENT_TOUCHSTART,
(event) => { if (event.touches.length !== 1) { return } this.app.fire('xr:recenter')})
}
this.app.fire('xr:screenshotrequest')
Parameters
None
Description
Emits a request to the engine to capture a screenshot of the PlayCanvas canvas. The engine will emit an xr:screenshotready event with the JPEG-compressed image, or xr:screenshoterror if an error has occurred.
this.app.on('xr:screenshotready', (event) => {
// screenshotPreview is an <img> HTML element
const image = document.getElementById('screenshotPreview')
image.src = 'data:image/jpeg;base64,' + event.detail
}, this)
this.app.on('xr:screenshoterror', (detail) => {
console.log(detail)
// Handle screenshot error.
}, this)
this.app.fire('xr:screenshotrequest')
this.app.fire('xr:showcamerafeed')
Parameters
None
Description
Shows the camera feed.
this.app.fire('xr:showcamerafeed')
this.app.fire('xr:stopxr')
Parameters
None
Description
Stops the current XR session. While stopped, the camera feed is stopped and device motion is not tracked.
this.app.fire('xr:stopxr')
Description
Provides a camera pipeline module that drives three.js camera to do virtual overlays.
Functions
Function | Description |
---|---|
pipelineModule | A pipeline module that interfaces with the threejs environment and lifecycle. |
xrScene | Get a handle to the xr scene, camera and renderer. |
XR8.Threejs.pipelineModule()
Description
A pipeline module that interfaces with the threejs environment and lifecycle. The threejs scene can be queried using Threejs.xrScene() after Threejs.pipelineModule()'s onStart method is called. Setup can be done in another pipeline module's onStart method by referring to Threejs.xrScene(), as long as XR8.addCameraPipelineModule is called on the second module after calling XR8.addCameraPipelineModule(Threejs.pipelineModule()).
Note that this module does not actually draw the camera feed to the canvas, GlTextureRenderer does that. To add a camera feed in the background, install the GlTextureRenderer.pipelineModule() before installing this module (so that it is rendered before the scene is drawn).
Parameters
None
Returns
A Threejs pipeline module that can be added via XR8.addCameraPipelineModule().
// Add XrController.pipelineModule(), which enables 6DoF camera motion estimation.
XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())
// Add a GlTextureRenderer which draws the camera feed to the canvas.
XR8.addCameraPipelineModule(XR8.GlTextureRenderer.pipelineModule())
// Add Threejs.pipelineModule() which creates a threejs scene, camera, and renderer, and
// drives the scene camera based on 6DoF camera motion.
XR8.addCameraPipelineModule(XR8.Threejs.pipelineModule())
// Add custom logic to the camera loop. This is done with camera pipeline modules that provide
// logic for key lifecycle moments for processing each camera frame. In this case, we'll be
// adding onStart logic for scene initialization, and onUpdate logic for scene updates.
XR8.addCameraPipelineModule({
// Camera pipeline modules need a name. It can be whatever you want but must be unique
// within your app.
name: 'myawesomeapp',
// onStart is called once when the camera feed begins. In this case, we need to wait for the
// XR8.Threejs scene to be ready before we can access it to add content.
onStart: ({canvasWidth, canvasHeight}) => {
// Get the 3js scene. This was created by XR8.Threejs.pipelineModule().onStart(). The
// reason we can access it here now is because 'myawesomeapp' was installed after
// XR8.Threejs.pipelineModule().
const {scene, camera} = XR8.Threejs.xrScene()
// Add some objects to the scene and set the starting camera position.
myInitXrScene({scene, camera})
// Sync the xr controller's 6DoF position and camera parameters with our scene.
XR8.XrController.updateCameraProjectionMatrix({
origin: camera.position,
facing: camera.quaternion,
})
},
// onUpdate is called once per camera loop prior to render. Any 3js geometry updates
// would typically happen here.
onUpdate: () => {
// Update the position of objects in the scene, etc.
updateScene(XR8.Threejs.xrScene())
},
})
XR8.Threejs.xrScene()
Description
Get a handle to the xr scene, camera, renderer and camera feed texture.
Parameters
None
Returns
An object: { scene, camera, renderer, cameraTexture }
Property | Description |
---|---|
scene | The Threejs scene. |
camera | The Threejs main camera. |
renderer | The Threejs renderer. |
cameraTexture | Threejs Texture with camera feed cropped to canvas' size. |
const {scene, camera, renderer, cameraTexture} = XR8.Threejs.xrScene()
Description
Utilities to talk to VPS services.
Functions
Function | Description |
---|---|
makeWayspotWatcher | Create a watcher to look for all Wayspots, not just Project Wayspots. |
projectWayspots | Returns a promise with an array of ClientWayspotInfo , which contains data about each of your project wayspots. |
XR8.Vps.makeWayspotWatcher({onVisible, onHidden, pollGps, lat, lng})
Description
Create a watcher to look for all Wayspots, not just Project Wayspots.
Parameters
Parameter | Description |
---|---|
onVisible [Optional] | Callback that is called when a new wayspot becomes visible within a 1000 meter radius. |
onHidden [Optional] | Callback that is called when a wayspot you previously saw is no longer within a 1000 meter radius from you. |
pollGps [Optional] | If true, turns on GPS and calls ‘onVisible’ and ‘onHidden’ callbacks with any wayspots found/lost through GPS movement. |
lat [Optional] | If lat or lng is set, calls onVisible and onHidden callbacks with any wayspots found/lost near the set location. |
lng [Optional] | If lat or lng is set, calls onVisible and onHidden callbacks with any wayspots found/lost near the set location. |
Returns
An object with the following methods:
{dispose(), pollGps(), setLatLng()}
Method | Description |
---|---|
dispose() | Clears state and stops GPS updates; no callbacks will be called after disposal. |
pollGps(Boolean) | Turn on or off gps updates. |
setLatLng(lat: Number, lng: Number) | Set the watcher's current location to lat / lng . |
const nearbyWayspots_ = []
// Records the time between getting each wayspot from the wayspotWatcher.
let gotAllWayspotsTimeout_ = 0
const onWayspotVisible = (wayspot) => {
nearbyWayspots_.push(wayspot)
window.clearTimeout(gotAllWayspotsTimeout_)
gotAllWayspotsTimeout_ = window.setTimeout(() => {
// We get the wayspots individually. If we want to perform an operation only
// after we have gotten all the nearby ones, we could do that here.
}, 0)
}
const onWayspotHidden = (wayspot) => {
const index = nearbyWayspots_.indexOf(wayspot)
if (index > -1) {
nearbyWayspots_.splice(index, 1)
}
}
let wayspotWatcher_ = null
const onAttach = ({}) => {
wayspotWatcher_ = XR8.Vps.makeWayspotWatcher(
{onVisible: onWayspotVisible, onHidden: onWayspotHidden, pollGps: true}
)
}
const onDetach = ({}) => {
// Cleanup the watcher
wayspotWatcher_.dispose()
}
XR8.Vps.projectWayspots()
Description
Query data about each of your project wayspots.
Parameters
None
Returns
A promise with an array of ClientWayspotInfo, which contains data about each of your project wayspots.
[{id, name, imageUrl, title, lat, lng }]
Property | Type | Description |
---|---|---|
id | String | id for this Wayspot, only stable within a session. |
name [Optional] | String | A reference to a Project Wayspot. |
imageUrl | String | URL to a representative image for this Wayspot. |
title | String | The Wayspot's title. |
lat | Number | Latitude of the Project Wayspot. |
lng | Number | Longitude of the Project Wayspot. |
// Log the project wayspots.
XR8.Vps.projectWayspots().then((projectWayspots) => {
projectWayspots.forEach((projectWayspot) => {
console.log('projectWayspot: ', projectWayspot)
})
})
Description
Provides a module that generates a Coaching Overlay for your Lightship VPS enabled Web AR experience.
For information on the Absolute Scale Coaching Overlay, please see here.
Functions
Function | Description |
---|---|
configure | Configures Coaching Overlay settings. |
pipelineModule | Creates a camera pipeline module that, when installed, adds coaching overlay functionality to your project. |
VpsCoachingOverlay.configure({ wayspotName, hintImage, animationColor, animationDuration, textColor, promptPrefix, promptSuffix, statusText, disablePrompt })
Description
Configures behavior and look of the Lightship VPS coaching overlay.
Parameters (All Optional)
Parameter | Type | Default | Description |
---|---|---|---|
wayspotName | String | | The name of the Wayspot which the coaching overlay is guiding the user to localize at. If no Wayspot name is specified, it will use the nearest project Wayspot. If the project does not have any project Wayspots, then it will use the nearest Wayspot. |
hintImage | String | | Image displayed to the user to guide them to the real-world location. If no hint image is specified, it will use the default image for the Wayspot. If the Wayspot does not have a default image, no image will be shown. |
animationColor | String | "#FFFFFF" | Color of the coaching overlay animation. This parameter accepts valid CSS color arguments. |
animationDuration | Number | 5000 | Total time the hint image is displayed before shrinking (in milliseconds). |
textColor | String | "#FFFFFF" | Color of all the coaching overlay text. This parameter accepts valid CSS color arguments. |
promptPrefix | String | "Point your camera at" | Sets the text string for advised user action above the name of the Wayspot. |
promptSuffix | String | "and move around" | Sets the text string for advised user action below the name of the Wayspot. |
statusText | String | "Scanning..." | Sets the text string that is displayed below the hint-image when it is in the shrunken state. |
disablePrompt | Boolean | false | Set to true to hide default coaching overlay in order to use coaching overlay events for a custom overlay. |
VpsCoachingOverlay.configure({
textColor: '#0000FF',
promptPrefix: 'Go look for',
})
VpsCoachingOverlay.pipelineModule()
Description
Creates a pipeline module that, when installed, adds VPS coaching overlay functionality to your Lightship VPS enabled WebAR project.
Parameters
None
Returns
A pipeline module that adds a VPS coaching overlay to your project.
// Configured here
VpsCoachingOverlay.configure({
textColor: '#0000FF',
promptPrefix: 'Go look for',
})
XR8.addCameraPipelineModules([
XR8.GlTextureRenderer.pipelineModule(),
XR8.Threejs.pipelineModule(),
XR8.XrController.pipelineModule(),
XRExtras.FullWindowCanvas.pipelineModule(),
XRExtras.Loading.pipelineModule(),
XRExtras.RuntimeError.pipelineModule(),
LandingPage.pipelineModule(),
// Added here
VpsCoachingOverlay.pipelineModule(),
...
])
Enumeration
Description
Desired camera to use.
Properties
Property | Value | Description |
---|---|---|
FRONT | front | Use the front facing / selfie camera. |
BACK | back | Use the rear facing / back camera. |
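As a sketch, the camera direction can be selected when starting the engine via XR8.run()'s cameraConfig option. The canvas id below is a placeholder, and the helper name is ours:

```javascript
// Sketch: start the engine with the front (selfie) camera.
// The default direction is the back camera.
const startWithFrontCamera = () => {
  XR8.run({
    canvas: document.getElementById('camerafeed'),
    cameraConfig: {direction: XR8.XrConfig.camera().FRONT},
  })
}
```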
Enumeration
Description
Specify the class of devices that the pipeline should run on. If the current device is not in that class, running will fail prior to opening the camera. If allowedDevices is XR8.XrConfig.device().ANY, always open the camera.
Note: World Effects (SLAM) can only be used with XR8.XrConfig.device().MOBILE_AND_HEADSETS or XR8.XrConfig.device().MOBILE.
Properties
Property | Value | Description |
---|---|---|
MOBILE | mobile | Restrict the camera pipeline to mobile-class devices, for example phones and tablets. |
MOBILE_AND_HEADSETS | mobile-and-headsets | Restrict the camera pipeline to mobile and headset class devices. |
ANY | any | Start running camera pipeline without checking device capabilities. This may fail at some point in the pipeline startup if a required sensor is not available at run time (for example, a laptop has no camera). |
Description
XrController provides 6DoF camera tracking and interfaces for configuring tracking.
Functions
Function | Description |
---|---|
configure | Configures what processing is performed by XrController (may have performance implications). |
hitTest | Estimate the 3D position of a point on the camera feed. |
pipelineModule | Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position. |
recenter | Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking. |
updateCameraProjectionMatrix | Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session. |
XrController.configure({ disableWorldTracking, enableLighting, enableWorldPoints, enableVps, imageTargets: [], leftHandedAxes, mirroredDisplay, scale })
Description
Configures the processing performed by XrController (some settings may have performance implications).
Parameters
Parameter | Type | Default | Description |
---|---|---|---|
disableWorldTracking [Optional] | Boolean | false | If true, turn off SLAM tracking for efficiency. This needs to be done BEFORE XR8.run() is called. |
enableLighting [Optional] | Boolean | false | If true, lighting will be provided by XrController.pipelineModule() as processCpuResult.reality.lighting |
enableWorldPoints [Optional] | Boolean | false | If true, worldPoints will be provided by XrController.pipelineModule() as processCpuResult.reality.worldPoints . |
enableVps [Optional] | Boolean | false | If true, look for Project Wayspots and a mesh. The mesh that is returned has no relation to Project Wayspots and will be returned even if no Project Wayspots are configured. Enabling VPS overrides settings for scale and disableWorldTracking . |
imageTargets [Optional] | Array | | List of names of the image targets to detect. Can be modified at runtime. Note: All currently active image targets will be replaced with the ones specified in this list. |
leftHandedAxes [Optional] | Boolean | false | If true, use left-handed coordinates. |
mirroredDisplay [Optional] | Boolean | false | If true, flip left and right in the output. |
scale [Optional] | String | responsive | Either responsive or absolute . responsive will return values so that the camera on frame 1 is at the origin defined via XR8.XrController.updateCameraProjectionMatrix(). absolute will return the camera, image targets, etc. in meters. When using absolute , the x-position, z-position, and rotation of the starting pose will respect the parameters set in XR8.XrController.updateCameraProjectionMatrix() once scale has been estimated. The y-position will depend on the camera's physical height from the ground plane. |
IMPORTANT: disableWorldTracking: true needs to be set BEFORE both XR8.XrController.pipelineModule() and XR8.run() are called, and cannot be modified while the engine is running.
XR8.XrController.configure({enableLighting: true, disableWorldTracking: false, scale: 'absolute'})
XR8.XrController.configure({enableVps: true})
// Disable world tracking (SLAM)
XR8.XrController.configure({disableWorldTracking: true})
// Open the camera and start running the camera run loop
XR8.run({canvas: document.getElementById('camerafeed')})
XR8.XrController.configure({imageTargets: ['image-target1', 'image-target2', 'image-target3']})
XrController.hitTest(X, Y, includedTypes = [])
Description
Estimate the 3D position of a point on the camera feed. X and Y are specified as numbers between 0 and 1, where (0, 0) is the upper left corner and (1, 1) is the lower right corner of the camera feed as rendered in the camera that was specified by updateCameraProjectionMatrix. Multiple 3D position estimates may be returned for a single hit test, based on the source of data being used to estimate the position. The data source that was used to estimate the position is indicated by hitTest.type.
Parameters
Parameter | Description |
---|---|
X | Value between 0 and 1 that represents the horizontal position on camera feed from left to right. |
Y | Value between 0 and 1 that represents the vertical position on camera feed from top to bottom. |
includedTypes | List of one or more of: 'FEATURE_POINT' , 'ESTIMATED_SURFACE' or 'DETECTED_SURFACE' . Note: Currently only 'FEATURE_POINT' is supported. |
Returns
An array of estimated 3D positions from the hit test:
[{ type, position, rotation, distance }]
Property | Description |
---|---|
type | One of 'FEATURE_POINT' , 'ESTIMATED_SURFACE' , 'DETECTED_SURFACE' , or 'UNSPECIFIED' |
position: {x, y, z} | The estimated 3D position of the queried point on the camera feed. |
rotation: {x, y, z, w} | The estimated 3D rotation of the queried point on the camera feed. |
distance | The estimated distance from the device of the queried point on the camera feed. |
const hitTestHandler = (e) => {
const x = e.touches[0].clientX / window.innerWidth
const y = e.touches[0].clientY / window.innerHeight
const hitTestResults = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
}
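Building on the handler above, a hit test result can be used to place content. The sketch below assumes a scene object named model with a three.js-style position.set() method; the helper names are ours:

```javascript
// Normalize a touch to camera-feed coordinates (0..1 from top-left).
const toFeedCoords = (clientX, clientY, width, height) => ({
  x: clientX / width,
  y: clientY / height,
})

// Place a (hypothetical) 'model' object at the first hit-test estimate.
const placeModelAtTap = (e, model) => {
  const {x, y} = toFeedCoords(
    e.touches[0].clientX, e.touches[0].clientY,
    window.innerWidth, window.innerHeight)
  const results = XR8.XrController.hitTest(x, y, ['FEATURE_POINT'])
  if (results.length === 0) { return }  // no estimate available for this point
  const {position} = results[0]
  model.position.set(position.x, position.y, position.z)
}
```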
XR8.XrController.pipelineModule()
Parameters
None
Description
Creates a camera pipeline module that, when installed, receives callbacks on when the camera has started, camera processing events, and other state changes. These are used to calculate the camera's position.
Returns
Return value is an object made available to onUpdate as:
processCpuResult.reality: { rotation, position, intrinsics, trackingStatus, trackingReason, worldPoints, realityTexture, lighting }
Property | Description |
---|---|
rotation: {w, x, y, z} | The orientation (quaternion) of the camera in the scene. |
position: {x, y, z} | The position of the camera in the scene. |
intrinsics | A column-major 4x4 projection matrix that gives the scene camera the same field of view as the rendered camera feed. |
trackingStatus | One of 'LIMITED' or 'NORMAL' . |
trackingReason | One of 'UNSPECIFIED' or 'INITIALIZING' . |
worldPoints: [{id, confidence, position: {x, y, z}}] | An array of detected points in the world at their location in the scene. Only filled if XrController is configured to return world points and trackingReason != INITIALIZING. |
realityTexture | The WebGLTexture containing camera feed data. |
lighting: {exposure, temperature} | Exposure of the lighting in your environment. Note: temperature has not yet been implemented. |
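As an illustration of consuming these per-frame values, the sketch below defines a custom pipeline module (the module name is ours) whose onUpdate reads processCpuResult.reality:

```javascript
// Sketch: a pipeline module that reports when tracking quality degrades.
// Install with XR8.addCameraPipelineModule(makeTrackingLogger(console.log)).
const makeTrackingLogger = (log) => ({
  name: 'tracking-logger',
  onUpdate: ({processCpuResult}) => {
    const reality = processCpuResult.reality
    if (!reality) { return }  // reality is only present once XrController runs
    if (reality.trackingStatus === 'LIMITED') {
      log(`tracking limited: ${reality.trackingReason}`)
    }
  },
})
```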
Dispatched Events
trackingStatus: Fires when XrController starts and tracking status or reason changes.
reality.trackingstatus : { status, reason }
Property | Description |
---|---|
status | One of 'LIMITED' or 'NORMAL' . |
reason | One of 'INITIALIZING' or 'UNDEFINED' . |
imageloading: Fires when detection image loading begins.
imageloading.detail : { imageTargets: {name, type, metadata} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
imagescanning: Fires when all detection images have been loaded and scanning has begun.
imagescanning.detail : { imageTargets: {name, type, metadata, geometry} }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
metadata | User metadata. |
geometry | Object containing geometry data. If type=FLAT: {scaledWidth, scaledHeight} , else if type=CYLINDRICAL or type=CONICAL: {height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians} |
If type = FLAT, geometry:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL, geometry:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imagefound: Fires when an image target is first found.
imagefound.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imageupdated: Fires when an image target changes position, rotation or scale.
imageupdated.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
imagelost: Fires when an image target is no longer being tracked.
imagelost.detail : { name, type, position, rotation, scale, scaledWidth, scaledHeight, height, radiusTop, radiusBottom, arcStartRadians, arcLengthRadians }
Property | Description |
---|---|
name | The image's name. |
type | One of 'FLAT' , 'CYLINDRICAL' , 'CONICAL' . |
position: {x, y, z} | The 3d position of the located image. |
rotation: {w, x, y, z} | The 3d local orientation of the located image. |
scale | A scale factor that should be applied to objects attached to this image. |
If type = FLAT:
Property | Description |
---|---|
scaledWidth | The width of the image in the scene, when multiplied by scale. |
scaledHeight | The height of the image in the scene, when multiplied by scale. |
If type = CYLINDRICAL or CONICAL:
Property | Description |
---|---|
height | Height of the curved target. |
radiusTop | Radius of the curved target at the top. |
radiusBottom | Radius of the curved target at the bottom. |
arcStartRadians | Starting angle in radians. |
arcLengthRadians | Central angle in radians. |
meshfound: Fires when a mesh is first found either after start or after a recenter().
xrmeshfound.detail : { id, position, rotation, geometry }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
geometry: {index, attributes} | An object containing raw mesh geometry data. Attributes contain position and color attributes. |
geometry is an object with the following properties:
Property | Description |
---|---|
index: Uint32Array() | Vertex indices of the mesh, where 3 contiguous indices make up a triangle. |
attributes | [ {name: 'position', array: Float32Array(), itemSize: 3}, {name: 'color', array: Float32Array(), itemSize: 3} ] |
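To illustrate the payload shape, the sketch below unpacks the geometry object described above into flat arrays that can be handed to a rendering framework; the helper name is ours:

```javascript
// Sketch: unpack the meshfound geometry payload. The input shape matches the
// table above: a Uint32Array index plus position/color vertex attributes.
const unpackMeshGeometry = (geometry) => {
  const positions = geometry.attributes.find((a) => a.name === 'position')
  const colors = geometry.attributes.find((a) => a.name === 'color')
  return {
    triangleCount: geometry.index.length / 3,
    vertexCount: positions.array.length / positions.itemSize,
    positions: positions.array,  // e.g. for a three.js BufferAttribute
    colors: colors ? colors.array : null,
  }
}
```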
meshupdated: Fires when the first mesh found changes position or rotation.
meshupdated.detail : { id, position, rotation }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session |
position: {x, y, z} | The 3d position of the located mesh. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located mesh. |
meshlost: Fires when recenter is called.
xrmeshlost.detail : { id }
Property | Description |
---|---|
id | An id for this mesh that is stable within a session. |
projectwayspotscanning: Fires when all Project Wayspots have been loaded for scanning.
projectwayspotscanning.detail : { wayspots: [] }
Property | Description |
---|---|
wayspots: [] | An array of objects containing Wayspot information. |
wayspots is an array of objects with the following properties:
Property | Description |
---|---|
id | An id for this Project Wayspot that is stable within a session |
name | Project Wayspot name. |
imageUrl | URL to a representative image for this Project Wayspot. |
title | Project Wayspot title. |
lat | Latitude of this Project Wayspot. |
lng | Longitude of this Project Wayspot. |
projectwayspotfound: Fires when a Project Wayspot is first found.
projectwayspotfound.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
projectwayspotupdated: Fires when a Project Wayspot changes position or rotation.
projectwayspotupdated.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
projectwayspotlost: Fires when a Project Wayspot is no longer being tracked.
projectwayspotlost.detail : { name, position, rotation }
Property | Description |
---|---|
name | The Project Wayspot name. |
position: {x, y, z} | The 3d position of the located Project Wayspot. |
rotation: {w, x, y, z} | The 3d local orientation (quaternion) of the located Project Wayspot. |
XR8.addCameraPipelineModule(XR8.XrController.pipelineModule())
const logEvent = ({name, detail}) => {
console.log(`Handling event ${name}, got detail, ${JSON.stringify(detail)}`)
}
XR8.addCameraPipelineModule({
name: 'eventlogger',
listeners: [
{event: 'reality.imageloading', process: logEvent },
{event: 'reality.imagescanning', process: logEvent },
{event: 'reality.imagefound', process: logEvent},
{event: 'reality.imageupdated', process: logEvent},
{event: 'reality.imagelost', process: logEvent},
],
})
XR8.XrController.recenter()
Parameters
None
Description
Repositions the camera to the origin / facing direction specified by updateCameraProjectionMatrix and restarts tracking.
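A minimal sketch wiring recenter to a one-finger tap, analogous to the PlayCanvas taprecenter script shown earlier; the helper name is ours:

```javascript
// Sketch: recenter the scene when the user taps the canvas with one finger.
const bindTapRecenter = (canvas) => {
  canvas.addEventListener('touchstart', (e) => {
    if (e.touches.length !== 1) { return }  // ignore multi-finger gestures
    XR8.XrController.recenter()
  })
}
```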
XR8.XrController.updateCameraProjectionMatrix({ cam, origin, facing })
Description
Reset the scene's display geometry and the camera's starting position in the scene. The display geometry is needed to properly overlay the position of objects in the virtual scene on top of their corresponding position in the camera image. The starting position specifies where the camera will be placed and facing at the start of a session.
Parameters
Parameter | Description |
---|---|
cam [Optional] | { pixelRectWidth, pixelRectHeight, nearClipPlane, farClipPlane } |
origin: { x, y, z } [Optional] | The starting position of the camera in the scene. |
facing: { w, x, y, z } [Optional] | The starting direction (quaternion) of the camera in the scene. |
cam has the following parameters:
Parameter | Description |
---|---|
pixelRectWidth | The width of the canvas that displays the camera feed. |
pixelRectHeight | The height of the canvas that displays the camera feed. |
nearClipPlane | The closest distance to the camera at which scene objects are visible. |
farClipPlane | The farthest distance to the camera at which scene objects are visible. |
XR8.XrController.updateCameraProjectionMatrix({ origin: { x: 1, y: 4, z: 0 }, facing: { w: 0.9856, x: 0, y: 0.169, z: 0 } })
Description
Provides information about device compatibility and characteristics.
Properties
Property | Type | Description |
---|---|---|
IncompatibilityReasons | Enum | The possible reasons for why a device and browser may not be compatible with 8th Wall Web. |
Functions
Function | Description |
---|---|
deviceEstimate | Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable. |
incompatibleReasons | Returns an array of XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false. |
incompatibleReasonDetails | Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false. |
isDeviceBrowserCompatible | Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported. |
Enumeration
Description
The possible reasons for why a device and browser may not be compatible with 8th Wall Web.
Properties
Property | Value | Description |
---|---|---|
UNSPECIFIED | 0 | The incompatible reason is not specified. |
UNSUPPORTED_OS | 1 | The estimated operating system is not supported. |
UNSUPPORTED_BROWSER | 2 | The estimated browser is not supported. |
MISSING_DEVICE_ORIENTATION | 3 | The browser does not support device orientation events. |
MISSING_USER_MEDIA | 4 | The browser does not support user media access. |
MISSING_WEB_ASSEMBLY | 5 | The browser does not support web assembly. |
XR8.XrDevice.deviceEstimate()
Description
Returns an estimate of the user's device (e.g. make / model) based on user agent string and other factors. This information is only an estimate, and should not be assumed to be complete or reliable.
Parameters
None
Returns
An object: { locale, os, osVersion, manufacturer, model }
Property | Description |
---|---|
locale | The user's locale. |
os | The device's operating system. |
osVersion | The device's operating system version. |
manufacturer | The device's manufacturer. |
model | The device's model. |
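A small sketch formatting the estimate for logging; as noted above, the values are best-effort, and the helper name is ours:

```javascript
// Sketch: build a short description string from a device estimate, e.g. from
// XR8.XrDevice.deviceEstimate(). Fields may be missing, so fall back gracefully.
const describeDevice = ({os, osVersion, manufacturer, model}) =>
  `${manufacturer || 'Unknown'} ${model || ''} (${os} ${osVersion})`.trim()

// In an 8th Wall app:
// console.log(describeDevice(XR8.XrDevice.deviceEstimate()))
```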
XR8.XrDevice.incompatibleReasons({ allowedDevices })
Description
Returns an array of XR8.XrDevice.IncompatibilityReasons why the device and browser are not supported. This will only contain entries if XR8.XrDevice.isDeviceBrowserCompatible() returns false.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
Returns an array of XrDevice.IncompatibilityReasons.
const reasons = XR8.XrDevice.incompatibleReasons()
for (let reason of reasons) {
switch (reason) {
case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_OS:
// Handle unsupported os error messaging.
break;
case XR8.XrDevice.IncompatibilityReasons.UNSUPPORTED_BROWSER:
// Handle unsupported browser
break;
...
  }
}
XR8.XrDevice.incompatibleReasonDetails({ allowedDevices })
Description
Returns extra details about the reasons why the device and browser are incompatible. This information should only be used as a hint to help with further error handling. These should not be assumed to be complete or reliable. This will only contain entries if XrDevice.isDeviceBrowserCompatible() returns false.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
An object: { inAppBrowser, inAppBrowserType }
Property | Description |
---|---|
inAppBrowser | The name of the in-app browser detected (e.g. 'Twitter'). |
inAppBrowserType | A string that helps describe how to handle the in-app browser. |
XR8.XrDevice.isDeviceBrowserCompatible({ allowedDevices })
Description
Returns an estimate of whether the user's device and browser is compatible with 8th Wall Web. If this returns false, XrDevice.incompatibleReasons() will return reasons about why the device and browser are not supported.
Parameters
Parameter | Description |
---|---|
allowedDevices [Optional] | Supported device classes, a value in XR8.XrConfig.device(). |
Returns
True or false.
XR8.XrDevice.isDeviceBrowserCompatible({allowedDevices: XR8.XrConfig.device().MOBILE})
Description
Utilities for specifying permissions required by a pipeline module.
Modules can indicate what browser capabilities they require that may need permissions requests. These can be used by the framework to request appropriate permissions if absent, or to create components that request the appropriate permissions before running XR.
Properties
Property | Type | Description |
---|---|---|
permissions() | Enum | List of permissions that can be specified as required by a pipeline module. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})
Enumeration
Description
Permissions that can be required by a pipeline module.
Properties
Property | Value | Description |
---|---|---|
CAMERA | camera | Require camera. |
DEVICE_MOTION | devicemotion | Require accelerometer. |
DEVICE_ORIENTATION | deviceorientation | Require gyro. |
DEVICE_GPS | geolocation | Require GPS location. |
MICROPHONE | microphone | Require microphone. |
XR8.addCameraPipelineModule({
name: 'request-gyro',
requiredPermissions: () => ([XR8.XrPermissions.permissions().DEVICE_ORIENTATION]),
})
The 8th Wall Image Target API enables developers to dynamically manage the image target library associated with their 8th Wall powered WebAR projects. This API and its accompanying documentation are designed for developers familiar with web development and 8th Wall image targets.
Before you begin: Before you start using the Image Target API, your workspace needs to be on an Enterprise billing plan. To upgrade, contact sales.
Authentication is provided by secret keys. Workspaces on an Enterprise plan can request an API Key. You'll include this secret key in each request to verify the request is authorized. Since the key is scoped to your workspace, the key will have access to all image targets inside all apps in that workspace.
You can view your key on your account page.
Important
The Image Target API key is a B2B key associated with your workspace. Follow best practices to secure your API key, as publicly exposing it can result in unintended use and unauthorized access. In particular, please avoid:
Note: These limits only apply to usage of the Image Target Platform API. They do not apply to end-user activations of a Web AR experience.
To request an increase to the Image Target API quota limits for projects in your workspace please send a request to support.
Upload a new target to an app's list of image targets
Request
curl -X POST "https://api.8thwall.com/v1/apps/$APP_KEY/targets" \
  -H "X-Api-Key:$SECRET_KEY" \
  -F "name=my-target-name" \
  -F "image=@image.png" \
  -F "geometry.top=0" \
  -F "geometry.left=0" \
  -F "geometry.width=480" \
  -F "geometry.height=640" \
  -F "metadata={\"customFlag\":true}" \
  -F "loadAutomatically=true"
Field | Type | Default Value | Description |
---|---|---|---|
image | Binary data | | PNG or JPEG format, must be at least 480x640, less than 2048x2048, and less than 10MB |
name | string | | Must be unique within an app, cannot include tildes (~), and cannot exceed 255 characters |
type [Optional] | string | "PLANAR" | "PLANAR", "CYLINDER", or "CONICAL" |
metadata [Optional] | string | null | Must be valid JSON if metadataIsJson is true, and cannot exceed 2048 characters |
metadataIsJson [Optional] | boolean | true | Set to false to use the metadata property as a raw string |
loadAutomatically [Optional] | boolean | false | Each app is limited to 5 image targets with loadAutomatically: true |
geometry.isRotated [Optional] | boolean | false | Set to true if the image is prerotated from landscape to portrait |
geometry.top | integer | | These four properties specify the crop to apply to your image. The crop must have a 3:4 aspect ratio and be at least 480x640 |
geometry.left | integer | | |
geometry.width | integer | | |
geometry.height | integer | | |
geometry.topRadius | integer | | Only needed for type: "CONICAL" |
geometry.bottomRadius | integer | | Only needed for type: "CONICAL" |
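As a sketch of the validation rules above, a hypothetical client-side pre-check might look like the following. The helper name and shape are illustrative only; the API remains the source of truth for validation.

```javascript
// Hypothetical client-side pre-check mirroring the documented upload rules.
// The API is authoritative; this only catches obvious mistakes early.
function validateTargetUpload({name, metadata, metadataIsJson = true, geometry}) {
  const errors = []
  if (!name || name.length > 255 || name.includes('~')) {
    errors.push('name must be 1-255 characters and cannot include tildes (~)')
  }
  if (metadataIsJson && metadata != null) {
    try { JSON.parse(metadata) } catch { errors.push('metadata must be valid JSON') }
  }
  if (metadata != null && metadata.length > 2048) {
    errors.push('metadata cannot exceed 2048 characters')
  }
  // The crop must be 3:4 (width:height) and at least 480x640.
  const {width, height} = geometry
  if (width * 4 !== height * 3) errors.push('crop must have a 3:4 aspect ratio')
  if (width < 480 || height < 640) errors.push('crop must be at least 480x640')
  return errors
}
```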
This diagram shows how the specified crop is applied to your uploaded image to generate the imageUrl and thumbnailImageUrl. The width:height ratio is always 3:4. For a landscape crop, upload the image rotated 90 degrees clockwise, set geometry.isRotated: true, and specify the crop against the rotated image.
This diagram shows how your uploaded image is flattened and cropped based on the parameters. The uploaded image is in a "rainbow" format where the top and bottom edges of your content are aligned to two concentric circles. If your target is narrower at the top than at the bottom, specify topRadius as the negative of the outer radius, and bottomRadius as the inner radius (positive). For a landscape crop, set geometry.isRotated: true, and the flattened image will be rotated before the crop is applied.
Response
This is the standard JSON response format for image targets.
{
"name": "...",
"uuid": "...",
"type": "PLANAR",
"loadAutomatically": true,
"status": "AVAILABLE",
"appKey": "...",
"geometry": {
"top": 842,
"left": 392,
"width": 851,
"height": 1135,
"isRotated": true,
"originalWidth": 2000,
"originalHeight": 2000
},
"metadata": null,
"metadataIsJson": true,
"originalImageUrl": "https://cdn.8thwall.com/image-target/...",
"imageUrl": "https://cdn.8thwall.com/image-target/...",
"thumbnailImageUrl": "https://cdn.8thwall.com/image-target/...",
"geometryTextureUrl": "https://cdn.8thwall.com/image-target/...",
"created": 1613508074845,
"updated": 1613683291310
}
Property | Type | Description |
---|---|---|
name | string | |
uuid | string | Unique ID of this image target |
type | string | "PLANAR", "CYLINDER", or "CONICAL" |
loadAutomatically | boolean | |
status | string | "AVAILABLE" or "TAKEN_DOWN" |
appKey | string | The app the target belongs to |
geometry | object | See below |
metadata | string | |
metadataIsJson | boolean | |
originalImageUrl | string | CDN URL for the source image that was uploaded |
imageUrl | string | Cropped version of geometryTextureUrl |
thumbnailImageUrl | string | 350px tall version of the imageUrl for use in thumbnails |
geometryTextureUrl | string | For conical, this is a flattened version of the original image; for planar and cylinder, this is the same as originalImageUrl |
created | integer | Creation date in milliseconds after unix epoch |
updated | integer | Last updated date in milliseconds after unix epoch |
Planar Geometry
Property | Type | Description |
---|---|---|
top | integer | |
left | integer | |
width | integer | |
height | integer | |
isRotated | boolean | |
originalWidth | integer | Width of the uploaded image |
originalHeight | integer | Height of the uploaded image |
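As an illustrative sanity check, the crop rectangle in geometry should fit within the uploaded image. The helper below is hypothetical (not part of the API) and, for simplicity, ignores geometry.isRotated.

```javascript
// Hypothetical helper: verify the crop rectangle fits inside the uploaded
// image (originalWidth x originalHeight). Ignores geometry.isRotated.
const cropFitsImage = ({top, left, width, height, originalWidth, originalHeight}) =>
  top >= 0 && left >= 0 &&
  left + width <= originalWidth &&
  top + height <= originalHeight

// Values from the sample response in this document:
cropFitsImage({
  top: 842, left: 392, width: 851, height: 1135,
  originalWidth: 2000, originalHeight: 2000,
})  // → true
```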
Cylinder or Conical Geometry
Extends the Planar Geometry properties, with the alteration that originalWidth and originalHeight refer to the dimensions of the flattened image stored at geometryTextureUrl.
Property | Type | Description |
---|---|---|
topRadius | float | |
bottomRadius | float | |
coniness | float | Always 0 for type: CYLINDER; derived from topRadius/bottomRadius for type: CONICAL |
cylinderCircumferenceTop | float | The circumference of the full circle traced by the upper edge of your target |
targetCircumferenceTop | float | The length along the upper edge of your target before the crop is applied |
cylinderCircumferenceBottom | float | Derived from cylinderCircumferenceTop and topRadius/bottomRadius |
cylinderSideLength | float | Derived from targetCircumferenceTop and the original image dimensions |
arcAngle | float | Derived from cylinderCircumferenceTop and targetCircumferenceTop |
inputMode | string | "BASIC" or "ADVANCED". Controls what users see in the 8th Wall console, either sliders or number input boxes. |
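The table says several fields are "derived" without spelling out the formulas. The sketch below reverse-engineers plausible derivations from the sample conical response later in this document (cylinderCircumferenceTop: 100, targetCircumferenceTop: 50, topRadius: 1600, bottomRadius: 640); treat it as illustrative, not as the official computation.

```javascript
// Illustrative (inferred, unofficial) derivations for cylinder/conical
// geometry fields, reconstructed from the sample response values.
const deriveCylinderGeometry = ({cylinderCircumferenceTop, targetCircumferenceTop,
  topRadius, bottomRadius}) => ({
  // Fraction of the full circle covered by the target's upper edge, in degrees.
  arcAngle: 360 * targetCircumferenceTop / cylinderCircumferenceTop,
  // The bottom circumference scales with the radius ratio.
  cylinderCircumferenceBottom: cylinderCircumferenceTop * bottomRadius / topRadius,
  // 0 for a cylinder (topRadius === bottomRadius); grows with taper.
  coniness: Math.log2(topRadius / bottomRadius),
})

deriveCylinderGeometry({cylinderCircumferenceTop: 100, targetCircumferenceTop: 50,
  topRadius: 1600, bottomRadius: 640})
// → {arcAngle: 180, cylinderCircumferenceBottom: 40, coniness: ~1.3219}
```

These outputs match the arcAngle, cylinderCircumferenceBottom, and coniness values in the sample response, which is why the formulas are plausible, but 8th Wall's actual derivation is not documented here.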
Query for a list of image targets that belong to an app. Results are paginated, meaning if the app contains more image targets than can be returned in one response, you will need to make multiple requests to enumerate the full list of image targets.
Request
curl "https://api.8thwall.com/v1/apps/$APP_KEY/targets" -H "X-Api-Key:$SECRET_KEY"
Parameter | Type | Description |
---|---|---|
by [Optional] | string | Specifies the column to sort by. Options are "created", "updated", "name", or "uuid". |
dir [Optional] | string | Controls the sort direction of the list. Either "asc" or "desc". |
start [Optional] | string | Specifies that the list starts with items that have this value in the by column |
after [Optional] | string | Specifies that the list starts immediately after items that have this value |
limit [Optional] | integer | Must be between 1 and 500 |
continuation [Optional] | string | Used to fetch the next page after the initial query. |
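A small helper can assemble the query string from these parameters. This is a hypothetical convenience (not part of the API); note that repeated by values express a secondary sort, as described below.

```javascript
// Hypothetical helper building the list-endpoint query string from the
// documented parameters. `by` may be a string or an array of columns.
function buildTargetsQuery({by = [], dir, start, after, limit, continuation} = {}) {
  const params = []
  for (const column of [].concat(by)) params.push(`by=${encodeURIComponent(column)}`)
  if (dir) params.push(`dir=${dir}`)
  if (start !== undefined) params.push(`start=${encodeURIComponent(start)}`)
  if (after !== undefined) params.push(`after=${encodeURIComponent(after)}`)
  if (limit !== undefined) params.push(`limit=${limit}`)
  if (continuation) params.push(`continuation=${encodeURIComponent(continuation)}`)
  return params.length ? `?${params.join('&')}` : ''
}

buildTargetsQuery({by: ['updated', 'uuid'], start: 333, after: 777})
// → '?by=updated&by=uuid&start=333&after=777'
```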
Sorted List
This query lists the app's targets starting from "z" and going towards "a".
curl "https://api.8thwall.com/v1/apps/$APP_KEY/targets?by=name&dir=desc" -H "X-Api-Key:$SECRET_KEY"
Multiple sorts
You can specify a secondary "sort-by" parameter which acts as a tiebreaker in the case of duplicates in your first by value. uuid is used as the default tiebreaker if unspecified.
curl "https://api.8thwall.com/v1/apps/$APP_KEY/targets?by=updated&by=uuid" -H "X-Api-Key:$SECRET_KEY"
Specify a starting point
You can specify start or after values that correspond to the by values to specify your current position in the list. If you want your list to start immediately after the item with updated: 333 and uuid: 777, you'd use:
curl "https://api.8thwall.com/v1/apps/$APP_KEY/targets?by=updated&by=uuid&start=333&after=777" -H "X-Api-Key:$SECRET_KEY"
This way, items with updated: 333 are still included in the next page if their uuid comes after 777. If an item's updated value is greater than 333, but its uuid is less than 777, it will still be included in the next page because the second by property only comes into play for tiebreakers.
It is not valid to specify an after value for the main sort while providing a start value for the tiebreaker sort. For example, it wouldn't be valid to specify ?by=name&by=uuid&after=my-name-&start=333. This should instead be ?by=name&by=uuid&after=my-name- because the second starting point only comes into play when the main starting point is inclusive (using start).
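The inclusion rule described above can be sketched as a predicate. This is illustrative only (assuming an ascending sort, with start on the primary column and after on the uuid tiebreaker), not the server's actual implementation.

```javascript
// Sketch of the paging rule: an item belongs to the next page if its primary
// sort value is past `start`, or ties with `start` and its uuid comes after
// `afterUuid`. Illustrative only; assumes ascending order.
const includeInPage = (item, primaryKey, startValue, afterUuid) =>
  item[primaryKey] > startValue ||
  (item[primaryKey] === startValue && item.uuid > afterUuid)

includeInPage({updated: 333, uuid: 800}, 'updated', 333, 777)  // → true (tiebreaker)
includeInPage({updated: 333, uuid: 700}, 'updated', 333, 777)  // → false
includeInPage({updated: 400, uuid: 1}, 'updated', 333, 777)    // → true (uuid ignored)
```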
Response
JSON object containing the property targets, which is an array of image target objects in the standard format. If continuationToken is present, specify ?continuation=[continuationToken] in a followup request to fetch the next page of image targets.
{
"continuationToken": "...",
"targets": [{
"name": "...",
"uuid": "...",
"type": "PLANAR",
"loadAutomatically": true,
"status": "AVAILABLE",
"appKey": "...",
"geometry": {
"top": 842,
"left": 392,
"width": 851,
"height": 1135,
"isRotated": true,
"originalWidth": 2000,
"originalHeight": 2000
},
"metadata": null,
"metadataIsJson": true,
"originalImageUrl": "https://cdn.8thwall.com/image-target/...",
"imageUrl": "https://cdn.8thwall.com/image-target/...",
"thumbnailImageUrl": "https://cdn.8thwall.com/image-target/...",
"geometryTextureUrl": "https://cdn.8thwall.com/image-target/...",
"created": 1613508074845,
"updated": 1613683291310
}, {
"name": "...",
"uuid": "...",
"type": "CONICAL",
"loadAutomatically": true,
"status": "AVAILABLE",
"appKey": "...",
"geometry": {
"top": 0,
"left": 0,
"width": 480,
"height": 640,
"originalWidth": 886,
"originalHeight": 2048,
"isRotated": true,
"cylinderCircumferenceTop": 100,
"cylinderCircumferenceBottom": 40,
"targetCircumferenceTop": 50,
"cylinderSideLength": 21.63,
"topRadius": 1600,
"bottomRadius": 640,
"arcAngle": 180,
"coniness": 1.3219280948873624,
"inputMode": "BASIC"
},
"metadata": "{\"my-metadata\": 34534}",
"metadataIsJson": true,
"originalImageUrl": "https://cdn.8thwall.com/...",
"imageUrl": "https://cdn.8thwall.com/...",
"thumbnailImageUrl": "https://cdn.8thwall.com/...",
"geometryTextureUrl": "https://cdn.8thwall.com/...",
"created": 1613508074845,
"updated": 1613683291310
}]
}
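The continuation flow above can be sketched as a loop. The helper is hypothetical: fetchPage stands for any function you supply that performs the HTTP request (e.g. with fetch and the X-Api-Key header, passing ?continuation=... when a token is given) and returns the parsed JSON body.

```javascript
// Hypothetical pagination helper. `fetchPage(continuation)` must return a
// Promise resolving to {targets, continuationToken?} from the list endpoint.
async function listAllTargets(fetchPage) {
  const all = []
  let continuation
  do {
    const page = await fetchPage(continuation)
    all.push(...page.targets)
    continuation = page.continuationToken  // undefined on the last page
  } while (continuation)
  return all
}
```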
Request
curl "https://api.8thwall.com/v1/targets/$TARGET_UUID" -H "X-Api-Key:$SECRET_KEY"
Response
JSON object of the standard image target format
The following properties can be modified:
name
loadAutomatically
metadata
metadataIsJson
The same validation rules apply as for the initial upload.
For cylinder and conical image targets, the following properties of the geometry object can also be modified:
cylinderCircumferenceTop
targetCircumferenceTop
inputMode
The other geometry properties of the target will be updated to match.
Request
curl -X PATCH "https://api.8thwall.com/v1/targets/$TARGET_UUID" \
  -H "X-Api-Key:$SECRET_KEY" \
  -H "Content-Type: application/json" \
  --data '{"name": "new-name", "geometry": {"inputMode": "BASIC"}, "metadata": "{}"}'
Response
JSON object of the standard image target format
Request
curl -X DELETE "https://api.8thwall.com/v1/targets/$TARGET_UUID" -H "X-Api-Key:$SECRET_KEY"
Response
A successful delete will return an empty response with status code 204: No Content.
Generate a URL that users can use to preview the tracking for a target.
Request
curl "https://api.8thwall.com/v1/targets/$TARGET_UUID/test" -H "X-Api-Key:$SECRET_KEY"
Response
{
"url": "https://8w.8thwall.app/previewit/?j=...",
"token": "...",
"exp": 1612830293128
}
Property | Type | Description |
---|---|---|
url | string | The URL that can be used to preview the target tracking |
token | string | This token can currently only be used by our preview app. |
exp | integer | The timestamp in milliseconds of when the token will expire. Tokens expire one hour after being issued. |
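Since tokens expire one hour after being issued, a caller may want to check exp before reusing a preview URL. A minimal, hypothetical helper:

```javascript
// Hypothetical helper: milliseconds of validity remaining for a preview
// token, given the `exp` timestamp (ms since the Unix epoch).
const previewMsRemaining = (exp, now = Date.now()) => Math.max(0, exp - now)

// If this returns 0, request a fresh preview URL from the /test endpoint.
```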
Preview functionality is intended to be used in the context of a specific user managing or configuring image targets. Do not publish preview URLs to a public site or share with a large number of users.
Best practices for custom preview experiences: The preview URL returned by the API is the 8th Wall generic image target preview experience. If you would like to further customize the frontend of your image target preview, take the following steps:
XR8.XrController.configure({imageTargets: ['theTargetName']}).
If the API rejects your request, the response will be Content-Type: application/json, and the body will contain a message property containing an error string.
Example
{
"message": "App not found: ..."
}
Status Codes
Status | Reason |
---|---|
400 | This can happen if you've specified an invalid value, or provided a parameter that does not exist. |
403 | This can happen if you are not providing your secret key correctly. |
404 | The app or image target may have been deleted, or the app key or target UUID is incorrect. This is also the response code if the provided API key doesn't match the resource you're attempting to access. |
413 | The uploaded image has been rejected because the file is too large. |
429 | Your API Key has exceeded its associated rate limit. |
Issue: When trying to view my Web App, I receive a "Device Not Authorized" error message.
Safari specific:
The situation:
Why does this happen?
Safari has a feature called Intelligent Tracking Prevention that can block third party cookies (what we use to authorize your device while you're developing). When they get blocked, we can't verify your device.
Steps to fix:
Settings>Safari>Prevent Cross-Site Tracking
Settings>Safari>Advanced>Website Data>8thwall.com
Settings>Safari>Clear History and Website Data
Otherwise
See Invalid App Key steps from #5 onwards for more troubleshooting.
Issue: When trying to view a self-hosted Web AR experience, I receive a "Domain Not Authorized" error message.
Solutions:
Make sure you have whitelisted the domain(s) of your web server. Self-Hosted domains are subdomain specific - e.g. "mydomain.com" is NOT the same as "www.mydomain.com". If you will be hosting at both mydomain.com and www.mydomain.com, you must specify BOTH. Please refer to the Connected Domains (see Self Hosted Projects) section of the docs for more info.
If Domain='' (empty), check the Referrer-Policy settings on your web server.
In the screenshot above, the Domain= value is empty. It should be set to the domain of your self-hosted WebAR experience. In this situation, the Referrer-Policy of your web server is too restrictive. The Referer HTTP header is used to verify that your app key is being used from an approved/whitelisted server. To verify the configuration, open the Chrome/Safari debugger and look at the Network tab. The xrweb Request Headers should include a Referer value, and this needs to match the domain(s) you have whitelisted in your project settings.
Incorrect - In this screenshot the Referrer Policy is set to "same-origin". This means a referrer will only be sent for same-site origins, but cross-origin requests will not send referrer information:
Correct - The xrweb Request Headers include a Referer value.
The default value of "strict-origin-when-cross-origin" is recommended. Please refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy for configuration options.
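To illustrate why "same-origin" breaks the whitelist check, here is a simplified (and deliberately incomplete) model of whether any Referer information is sent on a cross-origin request under common Referrer-Policy values; consult the MDN page above for the authoritative behavior.

```javascript
// Simplified, illustrative model only. Real browser behavior has more cases
// (e.g. HTTPS->HTTP downgrades suppress Referer under strict-* policies).
const sendsRefererCrossOrigin = (policy) => ({
  'no-referrer': false,
  'same-origin': false,                     // blocks cross-origin Referer entirely
  'origin': true,                           // sends the origin only
  'strict-origin': true,                    // origin, but not on HTTPS->HTTP
  'strict-origin-when-cross-origin': true,  // recommended default: origin only
  'no-referrer-when-downgrade': true,       // full URL, but not on HTTPS->HTTP
}[policy] ?? true)
```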
Issue: When trying to view my Web App, I receive an "Invalid App Key" or "Domain Not Authorized" error message.
Troubleshooting steps:
Issue: When using high resolution and/or a large number of textures on certain versions of iOS, Safari can run out of GPU memory. The textures may render black or cause the page to crash.
Workarounds:
Reduce the size/resolution of the textures used in your scene (see texture optimization)
Disable image bitmaps on iOS devices:
There are existing bugs in iOS 14 and iOS 15 related to image bitmaps that can cause texture issues. Disable image bitmaps to help prevent black textures and crashes. See example below:
// Bitmaps can cause texture issues on iOS. This workaround can help prevent black textures and crashes.
const IS_IOS =
/^(iPad|iPhone|iPod)/.test(window.navigator.platform) ||
(/^Mac/.test(window.navigator.platform) && window.navigator.maxTouchPoints > 1)
if (IS_IOS) {
window.createImageBitmap = undefined
}
Issue
When accessing a WebAR experience, the page is stuck on the Loading screen with an "infinite spinner".
Why does this happen?
If you are using the XRExtras loading module (which is included with all 8th Wall projects and examples by default), the loading screen is displayed while the scene and assets are loading, and while the browser waits for permission prompts to be accepted. If the scene takes a long time to load, or if something prevents the scene from fully initializing, it can appear to be "stuck" on this screen forever.
Potential Causes
If you are in a location with slow wifi and/or cellular service while attempting to load a Web AR page with large assets, the scene may not really be "stuck", but rather just taking a long time to load. Use the browser's Network inspector to see if your page is simply in the process of downloading assets.
Additionally, try to optimize scene assets as much as possible. This can include techniques such as compressing textures, reducing texture and/or video resolution, and reducing the polygon count of 3D models.
Some devices/browsers may not let you open the camera if it's already in use by another tab. Try closing any other tabs that may be using the camera, then re-load the page.
If you have added custom HTML/CSS elements to your Web AR experience, make sure that they are properly overlaid on top of the scene. If the video element on the page is pushed off-screen, iOS Safari won't render the video feed. This in turn triggers a series of events that make it appear as if 8th Wall is "stuck". In reality, here is what is going on:
Video feed doesn't render -> A-Frame scene doesn't fully initialize -> A-Frame scene never emits the "loaded" event -> XRExtras Loading module never disappears (it's listening for the scene's "loaded" event, which never fires!)
To diagnose this, we recommend using the Safari inspector's "Layout" view to visualize the positioning of your DOM content. Often you'll see something similar to the image below, where the video element is pushed "off the screen" / "below the fold".
To resolve, adjust the CSS positioning of your elements so they do not push the video feed off the screen. Using absolute positioning is one way to do this.
Issue: As I move my phone, the camera position does not update.
Resolution: Check the position of the camera in your scene. The camera should NOT be at a height (Y) of zero; set it to a non-zero value. The Y position of the camera at start effectively determines the scale of virtual content on a surface (e.g. smaller Y, bigger content).
Issue: Content in my scene doesn't appear to be "sticking" to a surface properly
Resolution:
To place an object on a surface, the base of the object needs to be at a height of Y=0
Note: Setting the position at a height of Y=0 isn't necessarily sufficient.
For example, if the transform of your model is at the center of the object, placing it at Y=0 will result in part of the object sitting below the surface. In this case you'll need to adjust the vertical position of the object so that the bottom of the object sits at Y=0.
It's often helpful to visualize object positioning relative to the surface by placing a semi-transparent plane at Y=0.
<a-plane position="0 0 0" rotation="-90 0 0" width="4" height="4" material="side: double; color: #FFFF00; transparent: true; opacity: 0.5" shadow></a-plane>
// Create a 1x1 plane with a transparent yellow material
const geometry = new THREE.PlaneGeometry(1, 1, 1, 1)  // (width, height, widthSegments, heightSegments)
const material = new THREE.MeshBasicMaterial({color: 0xffff00, transparent: true, opacity: 0.5, side: THREE.DoubleSide})
const plane = new THREE.Mesh(geometry, material)
// Rotate 90 degrees (in radians) along X so the plane is parallel to the ground
plane.rotateX(Math.PI / 2)
plane.position.set(0, 0, 0)
scene.add(plane)
Issue:
I'm using the "serve" script (from 8th Wall Web's public GitHub repo: https://github.com/8thwall/web) to run a local webserver on my laptop and it says it's listening on 127.0.0.1. My phone is unable to connect to the laptop using that IP address.
"127.0.0.1" is the loopback address of your laptop (aka "localhost"), so other devices such as your phone won't be able to connect directly to that IP address. For some reason, the serve
script has decided to listen on the loopback interface.
Resolution:
Please re-run the serve script with the -i flag and specify the network interface you wish to use.
Example (Mac):
./serve/bin/serve -d gettingstarted/xraframe/ -p 7777 -i en0
Example (Windows):
Note: Run the following command using a standard Command Prompt window (cmd.exe). The script will generate errors if run from PowerShell.
serve\bin\serve.bat -d gettingstarted\xraframe -p 7777 -i WiFi
If you are still unable to connect, please check the following:
Need some help? 8th Wall is here to help you succeed. Contact us directly, or reach out to the community to get answers.
Ways to get help:
Slack
Email Support
8th Wall's websites and SDKs may incorporate open source packages. Please see https://www.8thwall.com/open-source-licenses for details.
[1] Intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for 8th Wall’s products remain at the sole discretion of 8th Wall, Inc.