# Convert Any Image into a Hologram

{% embed url="https://youtu.be/n4cGL-sRfe8" %}

{% hint style="info" %}
Casting requires [***Looking Glass Bridge 2.3***](https://look.glass/bridge) or newer. Make sure Bridge is running and that your Looking Glass is ready in [***desktop mode***](https://lfdocs.lookingglassfactory.com/getting-started/portrait/get-started-with-looking-glass-portrait#desktop-mode).\
\
Need help with Bridge? Check out [this page](https://lfdocs.lookingglassfactory.com/software/looking-glass-bridge/using-looking-glass-bridge).
{% endhint %}

With Looking Glass Blocks, you can upload any 2D image and convert it into an awesome hologram to share, cast, or view on any internet-connected device.

## What images work best?

Photos with good contrast, high detail, and lots of distinct objects come out great. Here are some of our favorites:

{% tabs %}
{% tab title="Selfies" %}
{% embed url="https://blocks.glass/embed/6743?theme=dark&aspect=1.75" %}
Bryan & Antonia at the Unity XR meetup in NY!
{% endembed %}

{% endtab %}

{% tab title="Illustrated Content" %}
{% embed url="https://blocks.glass/bryanlkg/8956?theme=dark&aspect=1.75" %}
The Holorama Hall, a vision of the holographic future!
{% endembed %}
{% endtab %}

{% tab title="Close-up Nature" %}
{% embed url="https://blocks.glass/bryanlkg/7136?theme=dark&aspect=1.75" %}
Photo Credit: Anastasiia Malai <https://unsplash.com/photos/8UAarkxtHfA>
{% endembed %}

{% endtab %}

{% tab title="AI generated Images" %}
{% embed url="https://blocks.glass/embed/9073" %}
{% endtab %}
{% endtabs %}

## Uploading an image

Once you're signed in at <https://blocks.glass>, click the `+` in the top-right corner, then choose `3D Image` from the Create a Hologram menu.

If you're brand new to Blocks you'll see a few other options to guide you through the platform.

<figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/JGlmLI1fWbd9TMLNhCb0/image.png" alt=""><figcaption><p>3D Image is the option you'll want to select here; you can learn more about light fields on the next page.</p></figcaption></figure>

After you've clicked 3D Image, drag and drop any supported image onto the window, and you'll have a hologram within seconds!

<div data-full-width="true"><figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/HQVgfI3rFHjcRq6gYTwx/image.png" alt=""><figcaption><p>The upload modal in Blocks. Drag &#x26; drop your images and you'll have holograms in a jiffy!</p></figcaption></figure></div>

Once you've got your image uploaded, it'll take a few seconds to convert.&#x20;

## Casting your hologram

To view your hologram in your Looking Glass, make sure you have [**Bridge 2.3**](https://look.glass/bridge) or later running on your computer and that your Looking Glass is connected to your computer in [**desktop mode**](https://lfdocs.lookingglassfactory.com/getting-started/portrait/get-started-with-looking-glass-portrait#desktop-mode). Once you've got that set up, click the cast button and you'll see your hologram in your Looking Glass! :sparkles:

## Editing your hologram

There are three options to edit your hologram: **Depthiness**, **Focus**, and **Zoom**.

* **Depthiness** controls how much depth your photo has.
* **Focus** controls which part of the scene appears sharpest in a Looking Glass.
* **Zoom** controls how far your photo is zoomed in.

<figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/494APOfPG7qK4MkkKGKd/image.png" alt=""><figcaption><p>A screenshot showing the edit interface for 3D Images.</p></figcaption></figure>

## Downloading your hologram

You can download your hologram by clicking the `Download Assets` button on the edit, manage, or hologram pages and choosing the RGB-D option. To view your downloaded hologram, open it in [Looking Glass Studio](https://lfdocs.lookingglassfactory.com/software/looking-glass-studio).

{% hint style="info" %}
If you want to allow other users to download your hologram, click the *enable downloads* button!
{% endhint %}

<figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/iAOdxeEciOxluuKYyvfQ/image.png" alt=""><figcaption><p>The Edit panel for your hologram.</p></figcaption></figure>

<figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/gOrAn9Osq0Ngh2nxD07I/image.png" alt=""><figcaption><p>You can download the source image, RGB-D pair, or individual depth map via the <em>Download Assets</em> panel. RGB-D is what you'll want to use with Looking Glass Studio.</p></figcaption></figure>

## Privacy settings

Toggle between:&#x20;

* **Public** (anyone who visits your link will be able to see your hologram)
* **Unlisted** (only people you share your link with will be able to see your hologram)&#x20;
* **Private** (no one but you will be able to see your hologram)

## Convert with other AI tools — Distill Any Depth

While Looking Glass Blocks provides the easiest way to convert images to holograms, you can also create RGB-D images using other AI depth estimation tools and view them in Looking Glass Studio. [Distill Any Depth](https://huggingface.co/spaces/xingyang1/Distill-Any-Depth) is an excellent option for depth map generation with particularly good results for complex scenes.

{% hint style="info" %}
While we recommend using Blocks or Distill Any Depth, there are other options. [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) is an older but still very good converter; [Owl3D](https://www.owl3d.com/) provides a user-friendly app for conversions.
{% endhint %}

### Generate depth in Hugging Face

Hugging Face provides a user-friendly web interface, allowing you to upload your image and generate depth without any setup required. However, you'll have limited generations unless you have a paid account.

{% hint style="warning" %}
Hugging Face deployments can be taken down, which would make the web links below invalid. In that case, either run the model locally or use another conversion option like [Blocks](#uploading-an-image) or [Owl3D](https://www.owl3d.com/).
{% endhint %}

Access Distill Any Depth on Hugging Face directly via the embed below:

{% embed url="https://huggingface.co/spaces/xingyang1/Distill-Any-Depth" %}

If the embed doesn't work, [access Distill Any Depth here](https://huggingface.co/spaces/xingyang1/Distill-Any-Depth). If that deployment is down, there is an alternate deployment available [here](https://huggingface.co/spaces/vergacitas/Distill-Any-Depth).

Once you have accessed the tool:

* Upload your image and select "Submit"
* The model will generate a high-quality depth map
* Download both the original image and the **grayscale** depth map
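Before combining the pair, it can help to sanity-check that the downloaded depth map actually matches your source image. Below is a minimal sketch using Pillow; `check_depth_pair` is an illustrative helper, not part of any tool mentioned above:

```python
from PIL import Image

def check_depth_pair(color_path, depth_path):
    """Basic sanity checks on a color image and its depth map."""
    color = Image.open(color_path)
    depth = Image.open(depth_path)
    # The two images must share the same pixel dimensions
    if color.size != depth.size:
        return False, f"size mismatch: color {color.size} vs depth {depth.size}"
    # Depth should be grayscale; many tools export it as RGB, which still
    # works once converted, so only warn rather than fail
    if depth.mode not in ("L", "I", "I;16", "F"):
        return True, f"depth is {depth.mode}; convert it to grayscale before combining"
    return True, "pair looks usable"
```

A size mismatch usually means the depth tool rescaled your input, in which case you should resize one image to match the other before moving on.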

### Generate depth locally

Alternatively, you can install and run Distill Any Depth locally.

{% hint style="warning" %}
The instructions below are taken from the official GitHub repo and are up to date as of July 1st, 2025. For the latest instructions, [see the GitHub repo readme](https://github.com/Westlake-AGI-Lab/Distill-Any-Depth?tab=readme-ov-file#getting-started).
{% endhint %}

To start, set up a virtual environment (requires [Python](https://www.python.org/downloads/) and [Anaconda](https://www.anaconda.com/download) or [Miniconda](https://www.anaconda.com/docs/getting-started/miniconda/main)) and install the package, with dependencies, using this command:

```bash
# Run these commands from the root of a local clone of the repository

# Create a new conda environment with Python 3.10
conda create -n distill-any-depth -y python=3.10

# Activate the created environment
conda activate distill-any-depth

# Install the required Python packages
pip install -r requirements.txt

# Navigate to the Detectron2 directory and install it
cd detectron2
pip install -e .

# Return to the repository root and install the package itself
cd ..
pip install -e .
```

Then use the repo's helper script to run the model on an image:

```bash
# Run prediction on a single image using the helper script
source scripts/00_infer.sh
# or use bash
bash scripts/00_infer.sh
```

For more instructions, [see the official GitHub repo](https://github.com/Westlake-AGI-Lab/Distill-Any-Depth?tab=readme-ov-file#getting-started).

### Combine color and depth into one image

* Use any image editor (Photoshop, GIMP, or even online tools)
* Create a new canvas that's twice the width of your original image
* Place your original color image on the left side
* Place the grayscale depth map on the right side, as in the image below

<figure><img src="https://content.gitbook.com/content/PuCaeVAli72TiclYlEG5/blobs/FggEfUxP7zrIQr1PdayH/NikkiDepthMapSample.JPG" alt="" width="375"><figcaption></figcaption></figure>

* Save as a single PNG or JPG file
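If you'd rather script this step than use an image editor, the same side-by-side layout can be produced with a few lines of Pillow (`combine_rgbd` is an illustrative helper, not an official Looking Glass tool):

```python
from PIL import Image

def combine_rgbd(color_path, depth_path, out_path):
    """Write a side-by-side RGB-D image: color on the left, depth on the right."""
    color = Image.open(color_path).convert("RGB")
    depth = Image.open(depth_path).convert("L").convert("RGB")
    # Match the depth map to the color image's size if they differ
    if depth.size != color.size:
        depth = depth.resize(color.size)
    # New canvas twice as wide as the original image
    canvas = Image.new("RGB", (color.width * 2, color.height))
    canvas.paste(color, (0, 0))            # left half: original color image
    canvas.paste(depth, (color.width, 0))  # right half: grayscale depth map
    canvas.save(out_path)
```

For example, `combine_rgbd("photo.png", "depth.png", "photo_rgbd.png")` produces a file you can drag straight into Looking Glass Studio.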

### Load into Looking Glass Studio

* Open [Looking Glass Studio](https://look.glass/studio)
* Drag your RGB-D image into the application
* Select "RGB-D Photo" when prompted
* Adjust Depthiness and Focus settings to perfect your hologram

{% hint style="info" %}
For instructions on how to load your image into Looking Glass Studio for iOS, see our guide [here](https://lfdocs.lookingglassfactory.com/software/looking-glass-studio/studio-for-ios#loading-rgbd-images-and-videos).
{% endhint %}

## Which tool should you use?

* **Looking Glass Blocks**: Free and easy to use, providing automatic optimization and sharing
* **Distill Any Depth**: Excellent for complex scenes and highly accurate edges, but less user-friendly

All methods produce RGB-D images that can be viewed on your Looking Glass display, so choose based on your workflow preferences!


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://lfdocs.lookingglassfactory.com/community/convert-any-image-into-a-hologram.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
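Note that the question must be URL-encoded before it goes into the `ask` parameter. A minimal sketch in Python (the question text is just an example):

```python
from urllib.parse import urlencode

PAGE_URL = "https://lfdocs.lookingglassfactory.com/community/convert-any-image-into-a-hologram.md"

def build_ask_url(page_url, question):
    """Append a URL-encoded `ask` query parameter to the page URL."""
    return f"{page_url}?{urlencode({'ask': question})}"

url = build_ask_url(PAGE_URL, "What image formats does Blocks accept?")
# Perform an HTTP GET on `url` with any client to receive the answer
```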
