    Jeeliz Face Filter

    JavaScript/WebGL lightweight face tracking library designed for augmented reality webcam filters. Features: multiple face detection, rotation and mouth opening detection. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).

    JavaScript/WebGL lightweight and robust face tracking library designed for augmented reality face filters

    This JavaScript library detects and tracks the face in real time from the camera video feed captured with WebRTC. It is then possible to overlay 3D content for augmented reality applications. We provide various demonstrations using the main WebGL 3D engines. We have included in this repository release versions of the 3D engines so that the demos work with a known version (they are in /libs/<name of the engine>/).

    This library is lightweight and does not include any 3D engine or third-party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether the face is detected or not, the position and scale of the detected face, and its rotation Euler angles. But thanks to the featured helpers, examples and boilerplates, you can quickly deal with a higher-level context (head motion tracking, face filters, face replacement...). We continuously add new demonstrations, so stay tuned!


    Features

    Here are the main features of the library:

    • face detection,
    • face tracking,
    • face rotation detection,
    • mouth opening detection,
    • multiple face detection and tracking,
    • very robust to all lighting conditions,
    • video acquisition with HD video support,
    • mobile friendly,
    • interfaced with 3D engines like THREE.JS, BABYLON.JS, A-FRAME,
    • interfaced with more accessible APIs like CANVAS, CSS3D.


    The repository is organized as follows:

    • /demos/: source code of the demonstrations, sorted by 2D/3D engine used,
    • /dist/: core scripts of the library:
      • jeelizFaceFilter.js: main minified script,
      • jeelizFaceFilter.module.js: main minified script for use as a module (with import or require),
    • /neuralNets/: trained neural network models:
      • NN_DEFAULT.json: file storing the neural network parameters, loaded by the main script,
      • NN_<xxx>.json: alternative neural network models,
    • /helpers/: scripts which can help you to use this library in some specific use cases,
    • /libs/: 3rd party libraries and 3D engines used in the demos,
    • /reactThreeFiberDemo/: NPM/React/Webpack/Three-Fiber boilerplate.

    Demonstrations and apps

    Included in this repository

    These demonstrations are included in this repository, so they are released under the FaceFilter licence. You will probably find among them the perfect starting point to build your own face-based augmented reality application:

    Some screenshot videos are available on YouTube. You can also subscribe to the Jeeliz YouTube channel or to the @WebARRocks Twitter account to be kept informed of our cutting-edge developments.

    Third party

    These amazing applications rely on this library for face detection and tracking:

    • Vertebrae VTO: Vertebrae relies on this library for face detection and tracking for some of its virtual try-on products. You can check it out on:

      • Moscot: Click on the VIRTUAL TRY-ON button on the top-left of the product picture,
      • Goodr: Click on the VIRTUAL TRY-ON button on the top-left of the product picture,
      • Tenth Street: Click on the Try it on button.

    If you have developed an application or a fun demo using this library, we would love to see it and add a link here! Just contact us on Twitter @WebARRocks or LinkedIn.


    Here we describe how to use this library. Although we plan to add new features, the API will stay backward compatible.

    Get started

    On your HTML page, you first need to include the main script between the tags <head> and </head>:

     <script src="dist/jeelizFaceFilter.js"></script>

    Then you should include a <canvas> HTML element in the DOM, between the tags <body> and </body>. The width and height properties of the <canvas> element should be set: they define the resolution of the canvas, and the final rendering will be computed at this resolution. Be careful not to enlarge the canvas too much with its CSS properties without increasing its resolution, otherwise it may look blurry or pixelated. We advise fixing the resolution to the actual canvas size. Do not forget to call JEELIZFACEFILTER.resize() if you resize the canvas after the initialization step. We strongly encourage you to use our helper /helpers/JeelizResizer.js to set the width and height of the canvas (see the Optimization/Canvas and video resolutions section).

    <canvas width="600" height="600" id='jeeFaceFilterCanvas'></canvas>

    This canvas will be used by WebGL both for the computation and the 3D rendering. When your page is loaded you should launch this function:

      JEELIZFACEFILTER.init({
        canvasId: 'jeeFaceFilterCanvas',
        NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
        callbackReady: function(errCode, spec){
          if (errCode){
            console.log('AN ERROR HAPPENED. ERROR CODE =', errCode);
            return;
          }
          // [init scene with spec...]
          console.log('INFO: JEELIZFACEFILTER IS READY');
        }, //end callbackReady()
        // called at each render iteration (drawing loop)
        callbackTrack: function(detectState){
          // Render your scene here
          // [... do something with detectState]
        } //end callbackTrack()
      });

    Optional init arguments

    • <boolean> followZRot: Allow full rotation around the depth axis. Default value: false. See Issue 42 for more details,
    • <integer> maxFacesDetected: Only for multiple face detection - the maximum number of faces that can be detected and tracked. Should be between 1 (no multiple detection) and 8,
    • <integer> animateDelay: Used only in normal rendering mode (not in slow rendering mode). It sets the number of milliseconds the browser waits at the end of the rendering loop before starting another detection. If you use the canvas of this library as a secondary element (for example in the PACMAN or EARTH NAVIGATION demos) you should set a small animateDelay value (for example 2 milliseconds) in order to avoid rendering lags,
    • <function> onWebcamAsk: Function launched just before asking the user to allow access to their camera,
    • <function> onWebcamGet: Function launched just after the user has accepted to share their video. It is called with the video element as argument,
    • <dict> videoSettings: override the WebRTC video settings, which are by default:

      {
        'videoElement'        // not set by default. <video> element used.
         // WARNING: if you specify this parameter:
         //   1. all other settings will be ignored,
         //   2. it means that you fully handle the video aspect,
         //   3. if you use a webcam device, make sure that initialization
         //      happens after the `loadeddata` event of the `videoElement`,
         //      otherwise the face detector will yield very low `detectState.detected` values
         //      (to be safer, also await the first `timeupdate` event),
        'deviceId'            // not set by default
        'facingMode': 'user', // to use the rear camera, set to 'environment'
        'idealWidth': 800,    // ideal video width in pixels
        'idealHeight': 600,   // ideal video height in pixels
        'minWidth': 480,      // min video width in pixels
        'maxWidth': 1920,     // max video width in pixels
        'minHeight': 480,     // min video height in pixels
        'maxHeight': 1920,    // max video height in pixels
        'rotate': 0,          // rotation in degrees. Possible values: 0, 90, -90, 180
        'flipX': false        // whether to flip the video horizontally. Default: false
      }

      If the user has a mobile device in portrait display mode, the width and height of these parameters are automatically swapped for the first camera request. If it does not succeed, we swap the width and height.

    • <dict> scanSettings: override face scan settings - see set_scanSettings(...) method for more information.

    • <dict> stabilizationSettings: override tracking stabilization settings - see set_stabilizationSettings(...) method for more information.
    • <boolean> isKeepRunningOnWinFocusLost: Whether we should keep the detection loop running even if the user switches browser tabs or minimizes the browser window. Default value is false. This option is useful for videoconferencing apps, where a face mask should still be computed even if the FaceFilter window is not the active window. Even with this option enabled, face tracking is still slowed down when the FaceFilter window is not active.
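
    For illustration, here is a sketch of an init call combining several of these optional parameters (the values are arbitrary examples, not recommendations):

      JEELIZFACEFILTER.init({
        canvasId: 'jeeFaceFilterCanvas',
        NNCPath: '../../../neuralNets/',
        followZRot: true,    // allow full rotation around the depth axis
        maxFacesDetected: 2, // track up to 2 faces
        animateDelay: 2,     // small delay, e.g. if the canvas is a secondary element
        videoSettings: {
          'facingMode': 'environment', // use the rear camera
          'idealWidth': 1280,
          'idealHeight': 720
        },
        callbackReady: function(errCode, spec){ /* ... */ },
        callbackTrack: function(detectState){ /* ... */ }
      });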

    Error codes

    The initialization function (callbackReady in the code snippet) will be called with an error code (errCode). It can have these values:

    • false: no error occurred,
    • "GL_INCOMPATIBLE": WebGL is not available, or the WebGL configuration is not sufficient (there is no WebGL2, or there is WebGL1 without the OES_TEXTURE_FLOAT or OES_TEXTURE_HALF_FLOAT extension),
    • "ALREADY_INITIALIZED": the library has already been initialized,
    • "NO_CANVASID": no canvas or canvas ID was specified,
    • "INVALID_CANVASID": cannot find the <canvas> element in the DOM,
    • "INVALID_CANVASDIMENSIONS": the width and height dimensions of the canvas are not specified,
    • "WEBCAM_UNAVAILABLE": cannot get access to the camera (the user has no camera, or has not accepted to share the device, or the camera is already busy),
    • "GLCONTEXT_LOST": the WebGL context was lost. If the context is lost after the initialization, the callbackReady function will be launched a second time with this value as error code,
    • "MAXFACES_TOOHIGH": the maximum number of detected and tracked faces, specified by the optional init argument maxFacesDetected, is too high.

    The returned objects

    We detail here the arguments of the callback functions like callbackReady or callbackTrack. For memory optimization purposes, the references of these objects do not change, so you should copy their property values if you want to keep them unchanged outside the callback functions' scope.

    The initialization returned object

    The initialization callback function (callbackReady in the code snippet) is called with a second argument, spec, if there is no error. spec is a dictionary having these properties:

    • <WebGLRenderingContext> GL: the WebGL context. The rendering 3D engine should use this WebGL context,
    • <canvas> canvasElement: the <canvas> element,
    • <WebGLTexture> videoTexture: a WebGL texture displaying the camera video. It has the same resolution as the camera video,
    • [<float>, <float>, <float>, <float>] videoTransformMat2: flattened 2x2 matrix encoding a scaling and a rotation. Apply this matrix to viewport coordinates to render videoTexture in the viewport,
    • <HTMLVideoElement> videoElement: the video used as source for the WebGL texture videoTexture,
    • <int> maxFacesDetected: the maximum number of detected faces.

    The detection state

    At each render iteration a callback function is executed (callbackTrack in the code snippet). It has one argument (detectState) which is a dictionary with these properties:

    • <float> detected: the face detection probability, between 0 and 1,
    • <float> x, <float> y: The 2D coordinates of the center of the detection frame in the viewport (each between -1 and 1, x from left to right and y from bottom to top),
    • <float> s: the scale along the horizontal axis of the detection frame, between 0 and 1 (1 for the full width). The detection frame is always square,
    • <float> rx, <float> ry, <float> rz: the Euler angles of the head rotation in radians.
    • <Float32Array> expressions: array listing the facial expression coefficients:
      • expressions[0]: mouth opening coefficient (0 → mouth closed, 1 → mouth fully opened)

    In multiface detection mode, detectState is an array. Its size is equal to the maximum number of detected faces and each element of this array has the format described just before.
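
    For example, a minimal callbackTrack sketch reacting to the detection state (the 0.8 threshold is an arbitrary choice):

      callbackTrack: function(detectState){
        if (detectState.detected > 0.8){
          // a face is detected: read its pose and expression
          const mouthOpening = detectState.expressions[0]; // 0 -> closed, 1 -> fully opened
          console.log('face at (', detectState.x, ',', detectState.y,
            ') scale:', detectState.s,
            'rotation:', detectState.rx, detectState.ry, detectState.rz,
            'mouth:', mouthOpening);
        }
      }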

    Miscellaneous methods

    After the initialization (i.e. after callbackReady is launched), these methods are available:

    • JEELIZFACEFILTER.resize(): should be called after resizing the <canvas> element to adapt the cropping of the video. It should also be called if the device orientation is changed, to take the new video dimensions into account,

    • JEELIZFACEFILTER.toggle_pause(<boolean> isPause, <boolean> isShutOffVideo): pause/resume. This method will completely stop the rendering/detection loop. If isShutOffVideo is set to true, the media stream track will be stopped and the camera light will turn off. It returns a Promise object,

    • JEELIZFACEFILTER.toggle_slow(<boolean> isSlow): toggle the slow rendering mode: because this library can consume a lot of GPU resources, it may slow down other elements of the application. If the user opens a CSS menu for example, the CSS transitions and the DOM update can be slow. With this function you can slow down the rendering in order to relieve the GPU. Unfortunately the tracking and the 3D rendering will also be slower, but this is not a problem if the user is focusing on other elements of the application. We encourage you to enable the slow mode as soon as the user's attention is focused on something other than the canvas,

    • JEELIZFACEFILTER.set_animateDelay(<integer> delay): Change the animateDelay (see init() arguments),

    • JEELIZFACEFILTER.set_inputTexture(<WebGLTexture> tex, <integer> width, <integer> height): Replace the video input with a WebGL texture instance. The dimensions of the texture, in pixels, should be provided,

    • JEELIZFACEFILTER.reset_inputTexture(): Revert to the user's video as the input texture,

    • JEELIZFACEFILTER.get_videoDevices(<function> callback): Should be called before the init method. 2 arguments are provided to the callback function (see the sketch after this list):

      • <array> mediaDevices: an array with all the devices found. Each device is a JavaScript object having a deviceId string attribute. This value can be provided to the init method to use a specific camera. If an error happens, this value is set to false,
      • <string> errorLabel: if an error happens, the label of the error. It can be: NOTSUPPORTED, NODEVICESFOUND or PROMISEREJECTED.
    • JEELIZFACEFILTER.set_scanSettings(<object> scanSettings): Override scan settings. scanSettings is a dictionary with the following properties:

      • <float> scale0Factor: Relative width (1 → full width) of the searching window at the largest scale level. Default value is 0.8,
      • <int> nScaleLevels: Number of scale levels. Default is 3,
      • [<float>, <float>, <float>] overlapFactors: relative overlap along the X, Y and scale axes between 2 searching window positions. Higher values make the scan faster but it may miss some positions. Set to [1, 1, 1] for no overlap. Default value is [2, 2, 3],
      • <int> nDetectsPerLoop: number of detections per drawing loop. -1 for an adaptive value. Default: -1,
      • <boolean> enableAsyncReadPixels: enable asynchronous GPU readback. Default is false. It frees a lot of CPU resources but may add latency on some devices.
    • JEELIZFACEFILTER.set_stabilizationSettings(<object> stabilizationSettings): Override detection stabilization settings. The output of the neural network is always noisy, so we need to stabilize it using a floating average to avoid shaking artifacts. The internal algorithm first computes a stabilization factor k between 0 and 1. If k==0.0, we favor responsiveness over stabilization: this happens when the user is moving quickly, rotating the head, or when the detection is poor. On the contrary, if k is close to 1, the detection is good and the user does not move much, so we can stabilize strongly. stabilizationSettings is a dictionary with the following properties:

      • [<float> minValue, <float> maxValue] translationFactorRange: multiply k by a factor kTranslation depending on the translation speed of the head (relative to the viewport). kTranslation=0 if translationSpeed<minValue and kTranslation=1 if translationSpeed>maxValue. The regression is linear. Default value: [0.002, 0.005],
      • [<float> minValue, <float> maxValue] rotationFactorRange: analogous to translationFactorRange but for the rotation speed. Default value: [0.015, 0.1],
      • [<float> minValue, <float> maxValue] qualityFactorRange: analogous to translationFactorRange but for the head detection coefficient. Default value: [0.9, 0.98],
      • [<float> minValue, <float> maxValue] alphaRange: specifies how k is applied. Between 2 successive detections, we blend the previous detectState values with the current detection values using a mixing factor alpha. alpha=minValue if k<0.0 and alpha=maxValue if k>1.0. Between the 2 values, the variation is quadratic. Default value: [0.05, 1.0].
    • JEELIZFACEFILTER.update_videoElement(<video> vid, <function|False> callback): replace the video element used for face detection (which can be provided via videoSettings.videoElement) with another video element. A callback function can be called when it is done.

    • JEELIZFACEFILTER.update_videoSettings(<object> videoSettings): dynamically change the video settings (see Optional init arguments for the properties of videoSettings). It is useful for switching from the selfie camera (user) to the back (environment) camera. A Promise is returned (see the sketch after this list),

    • JEELIZFACEFILTER.set_videoOrientation(<integer> angle, <boolean> flipX): Dynamically change videoSettings.rotate and videoSettings.flipX. This method should be called after initialization. The default values are 0 and false. The angle should be chosen among these values: 0, 90, 180, -90,

    • JEELIZFACEFILTER.destroy(): Clean both the graphic memory and the JavaScript memory, and uninitialize the library. After that you need to init the library again. A Promise is returned,

    • JEELIZFACEFILTER.reset_GLState(): reset the WebGL context,

    • JEELIZFACEFILTER.render_video(): render the video on the <canvas> element.
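
    As an illustration of get_videoDevices and update_videoSettings, here is a hedged sketch (picking the first camera is an arbitrary example):

      // List the available cameras before initialization, then init with a chosen one:
      JEELIZFACEFILTER.get_videoDevices(function(mediaDevices, errorLabel){
        if (!mediaDevices){
          console.log('Cannot list video devices:', errorLabel);
          return;
        }
        JEELIZFACEFILTER.init({
          canvasId: 'jeeFaceFilterCanvas',
          NNCPath: '../../../neuralNets/',
          videoSettings: {
            'deviceId': mediaDevices[0].deviceId // arbitrary: use the first camera found
          },
          callbackReady: function(errCode, spec){ /* ... */ },
          callbackTrack: function(detectState){ /* ... */ }
        });
      });

      // Later, switch dynamically to the rear camera (a Promise is returned):
      JEELIZFACEFILTER.update_videoSettings({
        'facingMode': 'environment'
      }).then(function(){
        console.log('Now using the rear camera');
      });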


    Optimization

    1 or 2 Canvas?

    You can either:

    1. Use 1 <canvas> with 1 WebGL context, shared by FaceFilter and THREE.js (or another 3D engine),
    2. Use 2 separate <canvas> elements, aligned using CSS: 1 canvas for the AR rendering, and a second one to display the video and run this library.

    Option 1 is often more efficient, but the newest versions of THREE.js are not well suited to sharing their WebGL context, and some weird bugs can occur. So we strongly advise using 2 separate canvases.
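
    For example, the 2-canvas layout can be achieved with CSS like this (a minimal sketch; the IDs and sizes are arbitrary):

      <style>
        #arContainer { position: relative; }
        #arContainer canvas { position: absolute; top: 0; left: 0; }
      </style>
      <div id="arContainer">
        <!-- canvas used by FaceFilter to run the detection and display the video: -->
        <canvas width="600" height="600" id="jeeFaceFilterCanvas"></canvas>
        <!-- overlaid canvas used by the 3D engine for the AR rendering: -->
        <canvas width="600" height="600" id="threeCanvas"></canvas>
      </div>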

    Canvas and video resolutions

    We strongly recommend using the JeelizResizer helper to size the canvas to the display size, in order not to compute more pixels than required. This helper also computes the best camera resolution, which is the one closest to the actual canvas size. If the camera resolution is too high compared to the canvas resolution, your application will be unnecessarily slowed down because refreshing the WebGL texture for each video frame is quite costly. And if the video resolution is too low compared to the canvas resolution, the image will be blurry. You can take a look at the THREE.js boilerplate to see how it is used. To use the helper, you first need to include it in the HTML code:

    <script src=""></script>

    Then in your main script, before initializing Jeeliz FaceFilter, you should call it to size the canvas to the best resolution and to find the optimal video resolution:

      JeelizResizer.size_canvas({
        canvasId: 'jeeFaceFilterCanvas',
        callback: function(isError, bestVideoSettings){
          JEELIZFACEFILTER.init({
            videoSettings: bestVideoSettings,
            // ...
          });
          // ...
        }
      });

    Take a look at the source code of this helper (helpers/JeelizResizer.js) to get more information.


    A few tips:

    • In terms of optimization, the WebGL-based demos are more optimized than the Canvas2D demos, which are themselves more optimized than the CSS3D demos.
    • Use resources as light as possible: each texture image should have the lowest possible resolution, and use mipmapping for texture minification filtering.
    • The more effects you use, the slower the application will be. Add the 3D effects gradually to check that they do not penalize the frame rate too much.
    • Use low-polygon meshes.

    Multiple faces

    It is possible to detect and track several faces at the same time. To enable this feature, you only have to specify the optional init parameter maxFacesDetected. Its maximum value is 8. Indeed, if you track for example 8 faces at the same time, detection will be slower because there is 8 times less computing power per tracked face. If you have set this value to 8 but only 1 face is detected, it should not slow down much compared to single face tracking.

    If multiple face tracking is enabled, the callbackTrack function is called with an array of detection states (instead of a single detection state). The detection state format stays the same, as shown in the sketch below.
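
    A minimal multiface sketch (the 0.6 threshold is arbitrary):

      JEELIZFACEFILTER.init({
        canvasId: 'jeeFaceFilterCanvas',
        NNCPath: '../../../neuralNets/',
        maxFacesDetected: 4,
        callbackReady: function(errCode, spec){ /* ... */ },
        callbackTrack: function(detectStates){
          // detectStates is an array with one detection state per face slot:
          detectStates.forEach(function(detectState, faceIndex){
            if (detectState.detected > 0.6){
              // face number faceIndex is currently tracked
            }
          });
        }
      });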

    You can use our Three.js multiple faces detection helper, helpers/JeelizThreeHelper.js, to get started and test this example. The main script has only 60 lines of code!

    Multiple videos

    To create a new JEELIZFACEFILTER instance, you need to call:
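
    In recent versions of the library the instance factory is exposed as create_new (this name is an assumption here; check your version of the library if it differs):

      // assumption: create_new() returns a new, independent JEELIZFACEFILTER instance:
      const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();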


    Be aware that:

    • Each instance uses a new WebGL context. Depending on the configuration, the number of WebGL contexts is limited. We advise not to use more than 16 contexts simultaneously,
    • The computing power will be shared between the contexts. Using multiple instances may increase the latency.

    Check out this demo to see an example of how it works: source code, live demo

    Changing the 3D engine

    It is possible to use a 3D engine other than BABYLON.JS or THREE.JS. If you have done this work, we would be interested in adding your demonstration to this repository (or linking to your code). Just open a pull request.

    The 3D engine can either share the WebGL context and the canvas with FaceFilter, or use a second canvas overlaid on the FaceFilter canvas (the FaceFilter canvas is just used to render the video). In the first case, the WebGL context is created by Jeeliz Face Filter. We strongly encourage the second approach, even if the first one may be a bit more optimized.

    Changing the neural network

    Since July 2018 it is possible to change the neural network. When calling JEELIZFACEFILTER.init({...}), set the NNCPath value to a specific neural network file instead of the default NN_DEFAULT.json:

      JEELIZFACEFILTER.init({
        NNCPath: '../../neuralNets/NN_LIGHT_1.json'
        // ...
      });

    It is also possible to provide the neural network model JSON file content directly, using the NNC property instead of NNCPath.

    We provide several neural network models:

    • neuralNets/NN_DEFAULT.json: the default neural network. A good tradeoff between size and performance,
    • neuralNets/NN_WIDEANGLES_<X>.json: better at detecting wide head angles (but less accurate for small angles),
    • neuralNets/NN_LIGHT_<X>.json: a light version of the neural network. The file is half the size and it runs faster, but it is less accurate for large head rotation angles,
    • neuralNets/NNC_VERYLIGHT_<X>.json: even lighter than the previous version (250 KB) and very fast, but less accurate and less robust to difficult lighting conditions,
    • neuralNets/NN_VIEWTOP_<X>.json: perfect if the camera has a bird's eye view (for example if you use this library for a kiosk setup),
    • neuralNets/NN_INTEL1536.json: neural network working with Intel 1536 Iris GPUs (there is a graphic driver bug, see #85),
    • neuralNets/NN_4EXPR_<X>.json: also detects 4 facial expressions (mouth opening, smile, frowned eyebrows, raised eyebrows).

    Using module

    /dist/jeelizFaceFilter.module.js is exactly the same as /dist/jeelizFaceFilter.js except that it works as a JavaScript module, so you can import it directly using:

    import 'dist/jeelizFaceFilter.module.js'

    or using require (see issue #72):

    const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER;

    faceFilter.init({
      // you can also provide the canvas directly
      // using the canvas property instead of canvasId:
      canvasId: 'jeeFaceFilterCanvas',
      NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
      callbackReady: function(errCode, spec){
        if (errCode){
          console.log('AN ERROR HAPPENED. ERROR CODE =', errCode);
          return;
        }
        // [init scene with spec...]
        console.log('INFO: JEELIZFACEFILTER IS READY');
      }, //end callbackReady()
      // called at each render iteration (drawing loop)
      callbackTrack: function(detectState){
        // Render your scene here
        // [... do something with detectState]
      } //end callbackTrack()
    });


    With a bundler

    If you use this library with a bundler (typically Webpack or Parcel), you should first use the module version.

    Then, note that with the standard library we load the neural network model (specified by the NNCPath initialization parameter) using AJAX, for the following reasons:

    • If the user does not accept to share their camera, or if WebGL is not enabled, we don't have to load the neural network model,
    • We suppose that the library is deployed using a static HTTPS server.

    With a bundler it is a bit more complicated. It is easier to load the neural network model using a classical import or require call, and to provide it using the NNC init parameter:

    const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER
    const neuralNetworkModel = require('./neuralNets/NN_DEFAULT.json')

    faceFilter.init({
      NNC: neuralNetworkModel, // instead of NNCPath
      // ... other init parameters
    })

    You can check out the amazing work of @jackbilestech, jackbilestech/jeelizFaceFilter, if you are interested in using this library in an NPM / ES6 / Webpack environment.

    With JavaScript frontend frameworks

    With REACT and THREE Fiber

    Since October 2020, there has been a React/THREE Fiber/Webpack boilerplate in the /reactThreeFiberDemo path.

    See also

    We don't officially cover integration with mainstream JavaScript frontend frameworks (React, Vue, Angular) here. Feel free to submit a pull request to add a boilerplate or a demo for a specific framework. Here is a bunch of submitted issues dealing with React integration:

    You can also take a look at these GitHub code repositories:


    It is possible to execute a JavaScript application using this library in a WebView for a native app integration. On iOS, camera access is disabled inside the WKWebView component before iOS 14.3. If you want to make your application run on devices running iOS <= 14.2, you have to implement a hack to stream the camera video into the WebView using WebSockets.

    This hack has not been implemented in this repository, but in a similar Jeeliz library, Jeeliz Weboji. Here are the links:

    But it is still a dirty hack introducing a bottleneck. It still runs pretty well on a high-end device (tested on an iPhone XR), but it is better to stick to a full web environment.

    There is also this GitHub issue detailing how to embed the library into a WebView component for React Native. It is for Android only:


    Since September 2023, Marks has developed a Unity plugin to create face filters using Unity and export them for the web. You can buy it on the Unity Asset Store here: Augmented Reality WebGL - Face Tracking Virtual Try On


    This library requires the user's camera video feed through the MediaStream API, so your application should be hosted by an HTTPS server (even with a self-signed certificate). It won't work at all over insecure HTTP, even locally with some web browsers.

    The development server

    For development purposes we provide a simple and minimalist HTTPS server so you can check out the demos or develop your very own filters. To launch it, execute in the bash console:

    With Python 2:


    It requires Python 2.X. Then open https://localhost:4443 in your web browser.

    With Node:

      npm install
      npm run dev

    Then go to the indicated URL in your browser. The browser will warn that the connection is not secure (because of the self-signed certificate): click Advanced, then Proceed.

    Hosting optimization

    You can use our hosted and up-to-date version of the library, available here:

    It uses the neural network NN_DEFAULT.json hosted in the same path. The helpers used in these demos (all scripts in /helpers/) are also hosted on

    It is served through a content delivery network (CDN) using gzip compression. If you host the scripts yourself, be careful to enable gzip HTTP/HTTPS compression for JSON and JS files. Indeed, the neural network JSON file, neuralNets/NN_DEFAULT.json, is quite heavy but compresses very well with gzip. You can check the gzip compression of your server here.

    The neural network file, neuralNets/NN_DEFAULT.json, is loaded using an AJAX XMLHttpRequest after calling JEELIZFACEFILTER.init(). This loading happens after the user has accepted to share their camera, so we won't load this quite heavy file if the user refuses to share it or if there is no camera available. The loading can be faster if you systematically preload neuralNets/NN_DEFAULT.json using a service worker or a simple raw XMLHttpRequest just after the HTML page loads. The file will then already be in the browser cache when Jeeliz FaceFilter requests it, as in the sketch below.
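
    For example, a minimal preloading sketch using a raw XMLHttpRequest (the path is an assumption; adjust it to where you host the model):

      // Warm the browser cache with the neural network model right after page load:
      window.addEventListener('load', function(){
        const xhr = new XMLHttpRequest();
        xhr.open('GET', './neuralNets/NN_DEFAULT.json'); // assumed deployment path
        xhr.send();
      });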

    About the tech

    Under the hood

    This library relies on Jeeliz WebGL Deep Learning technology to detect and track the user's face using a neural network. The accuracy is adaptive: the better the hardware, the more detections are processed per second. Everything is done client-side.


    • If WebGL2 is available, it uses WebGL2 and no specific extension is required,
    • If WebGL2 is not available but WebGL1 is, we require either the OES_TEXTURE_FLOAT extension or the OES_TEXTURE_HALF_FLOAT extension,
    • If WebGL2 is not available, and if WebGL1 is not available or neither OES_TEXTURE_FLOAT nor OES_TEXTURE_HALF_FLOAT is implemented, the device is not compatible.

    If a compatibility error occurs, please post an issue on this repository. If the problem is with camera access, please first retry after closing all applications that could be using the camera (Skype, Messenger, other browser tabs and windows, ...). Please include:

    • the browser, the version of the browser, the operating system, the version of the operating system, the device model and the GPU if it is a desktop computer,
    • a screenshot of the WebGL1 report (about your WebGL1 implementation),
    • a screenshot of the WebGL2 report (about your WebGL2 implementation),
    • the log from the web console,
    • the steps to reproduce the bug, and screenshots.

    Articles and tutorials

    Have you written a tutorial using this library? Submit a pull request or send us the link, we would be glad to add it.

    In English

    In French

    In Japanese


    License

    Apache 2.0. This application is free for both commercial and non-commercial use.

