3D Scanning vs. 3D Modeling: Two Methods with Different Strengths
What's behind 3D Modeling and 3D Scanning? What are the differences and connections?

If you want to create professional product images and product videos for marketing, you probably first think of a photo or video shoot.
However, there is an alternative that is being used increasingly often: 3D visualization.
And it's significantly less complex and costly than you might think.
When looking for such providers, two terms repeatedly pop up: 3D Scan and 3D Model.
What's behind them? How do these two terms and solutions differ? All of this will be clarified in this post.
What alternatives to photoshoots are there for professional product images?
The methods for creating images and videos of products using 3D technology can be divided into two areas.
The two most common methods are 3D scanning and 3D modeling.
The distinction isn't always clear, as 3D modeling can also be based on a 3D scan, depending on its use.
And 3D scanning itself can work in several different ways.
This becomes clear when you take a closer look at the functions of the methods.

3D Scanning:
In 3D scanning, a physical object is automatically scanned, as the name suggests. This is done with specially developed camera and laser technologies.
Two widely used methods for creating 3D scans are:
- Scanning using LiDAR:
LiDAR stands for Light Detection and Ranging, which loosely translates to: light detection and distance measurement. It is also known as 3D laser scanning, as this is precisely the basis of its function: laser pulses are used to measure the distances from the camera to objects and their many details.
LiDAR then outputs so-called point clouds of an object or entire buildings and landscapes, which result in a 3D structure.
However, these point clouds alone are not sufficient to achieve photorealistic images or product images. Therefore, the method is often combined with photogrammetry.
- Scanning using Photogrammetry:
Photogrammetry creates three-dimensional images from two-dimensional videos or photographs. More precisely, these recordings, which fundamentally lack distance information, are transformed into 3D scans of surfaces by software or by superimposing them with LiDAR recordings.
For visualizing products using these two techniques, one can say:
- Photogrammetry can independently create product images as 3D scans through additional software.
- LiDAR technology can improve these results but is not designed for product visualization on its own.
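The distance measurement behind LiDAR can be sketched in a few lines. This is a minimal, illustrative example of the time-of-flight principle described above (not the code of any real scanner): a laser pulse travels to the object and back, and the distance is half the round-trip time multiplied by the speed of light.

```python
# Minimal sketch of the LiDAR time-of-flight principle.
# Values are illustrative, not from a real device.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the object, from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 20 nanoseconds corresponds to roughly 3 meters:
print(round(lidar_distance(20e-9), 3))  # → 2.998
```

A real scanner performs millions of such measurements per second, each paired with the beam's direction, which is how the point clouds mentioned above arise.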
3D Modeling:
In 3D modeling, a fundamentally different approach is used.
- Modeling using 3D Software:
In 3D modeling, 3D designers digitally recreate the product 1:1 using specialized software applications.
Modeling represents the first step to ultimately obtain a photorealistic image, video, animation, or virtual application.
Modeling involves creating a three-dimensional model of the product with all geometric details. This can then be designed and staged very flexibly and individually.
How does 3D scanning work for a product image?
For this post, we focus on photogrammetry as a 3D scanning method, as it is the method most commonly used either to create product images directly or to serve as a basis for 3D modeling, which then produces the product images.
How photogrammetry works for product images
Depending on the provider and the customer's requirements, the production process can vary.
However, the basic workflow of photogrammetry for creating product images can be divided into these individual steps:
- Standardized capture of many photos: Multiple photos of the object or scene are taken from different angles. These photos must have overlapping areas so that the software can assign corresponding points. Depending on product size, details, or quality requirements, between 20 and 100 photos are taken per product.
- Software identifies matching features: Special software then analyzes all images to find common points between them. By comparing the positions of these points in the different photos, the software can begin to create a 3D representation.
- Triangulation: Using the known properties of the camera (e.g., focal length) and the position of the identified points in the images, the software calculates each point's distance from the camera. This is called triangulation and helps create the basic structure of the 3D model.
- Generation of a point cloud: The calculated 3D positions of the points form a "point cloud," i.e., a collection of 3D coordinates in space. This point cloud represents the basic shape and structure of the object.
- Generation of a mesh: The point cloud is then used to create a mesh, which represents a surface composed of interconnected triangles (polygons). This mesh represents the surface geometry of the object. More points and higher-resolution images contribute to a more detailed and accurate mesh.
- Texture mapping: The original photos are now used to texture the digitally created mesh. Each point of the mesh is re-projected onto the original images, and the corresponding color information is applied to create the most realistic appearance possible.
- Refinement: The generated mesh can have imperfections, holes, or inaccuracies. In further steps, the virtual model can be cleaned up. This is done manually by 3D designers and with appropriate software.
- Export: Once the 3D model is satisfactorily generated, it can be exported to various formats suitable for visualization, animation, or further processing.
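The triangulation step above can be illustrated with the simplest possible case: two cameras side by side (a stereo pair). The same point appears at slightly different horizontal pixel positions in each photo; from that shift (the "disparity"), the known focal length, and the distance between the cameras (the baseline), the point's depth follows directly. This is a toy sketch with made-up numbers, not a full photogrammetry pipeline:

```python
# Toy sketch of stereo triangulation: depth = focal length * baseline / disparity.
# All values are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (distance from the cameras) of a point matched in both photos."""
    return focal_px * baseline_m / disparity_px

# Focal length 1000 px, cameras 10 cm apart, point shifted by 50 px:
print(depth_from_disparity(1000.0, 0.10, 50.0))  # → 2.0 (meters)
```

Photogrammetry software repeats this kind of calculation for thousands of matched points across many camera positions, which yields the point cloud described above.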
How accurate are 3D scans?
The accuracy and quality of 3D models created using photogrammetry depend heavily on the individual process steps, particularly:
- the quality of the photos
- a professional, consistent setting
- professional lighting setup
- overlap between images
- camera calibration
In combination with other technologies such as laser scanning, photogrammetry can achieve higher accuracy and detail.
Current developments in 3D scans
An exciting development in this area is the so-called NeRF technology.
NeRF (Neural Radiance Field) aims to increase the quality of 3D scans while requiring fewer images as a starting point.
Here, AI is intended to help close gaps and fix inaccuracies in the generated 3D models. Neural networks are designed to complete unseen (not photographically captured) aspects using previously machine-learned visual knowledge.
This is an exciting development that currently does not yet yield high-quality results. However, it is possible that in a few years, it will be even easier to create 3D models and visualizations using such methods.
How does 3D modeling work for product images?
The process of creating a 3D model using appropriate software involves several distinct steps. These may vary slightly depending on the requirements.
A big advantage: The product doesn't even have to exist yet or be physically available.
The basis can be a reference photo, dimensions, or existing 3D data in the form of, for example, CAD or STEP files.
This allows for relatively simple and efficient creation of a CAD visualization, for example.
The following steps are necessary if a product is to be visualized using 3D modeling without existing 3D data.
1. Mesh - creating the framework
Creating basic shapes: The modeling process often begins with creating basic shapes such as cubes, spheres, cylinders, and planes. These basic shapes serve as building blocks for more complex objects.
Mesh editing: Once the basic shape(s) have been created, they can be modified and transformed using tools such as scaling, rotating, and moving. This is the fundamental step for designing a 3D object.
Editing vertices, edges, and faces: In advanced modeling, individual vertices (points), edges (lines connecting vertices), and faces (surfaces created by connecting edges) can be edited. This allows for precise control over the geometry and, above all, additional details that arise with more complex objects.
Extrusion and Beveling: These are techniques to add depth and dimension to flat shapes. Extrusion involves pulling a 2D shape in a specific direction to create a 3D object, while beveling adds rounded edges. This is also a way to realize further details of objects.
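The mesh structure described above can be shown with plain data. This is a minimal sketch, not the internal format of any particular 3D software: a unit cube stored as 8 vertices (3D points) and 6 quadrilateral faces, each face listing the indices of its corner vertices. 3D applications store meshes in essentially this form.

```python
# A unit cube as a minimal mesh: vertices (3D points) plus faces
# (index lists into the vertex array).

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four corners
]

faces = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (0, 3, 7, 4),  # left
]

# Moving a single vertex deforms the cube; this is vertex editing
# in its simplest form:
vertices[6] = (1.2, 1.2, 1.2)
print(len(vertices), len(faces))  # → 8 6
```

Every tool mentioned above (scaling, extrusion, beveling) ultimately boils down to creating, moving, or connecting entries in these two lists.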
2. Texturing: Bringing surfaces to the framework
UV Mapping: For textures to be applied to a 3D model, the model's surface must first be "unwrapped" into a 2D representation; this process is called UV mapping, and its result is a UV map.
You can imagine this process as follows: If you unwrap a chocolate Santa Claus and lay the aluminum foil flat, you get the Santa Claus's texture in 2D as a so-called UV map.
The letters "U" and "V" denote the two axes of the 2D texture, as "X," "Y," and "Z" are already used to denote the axes of the 3D object in model space.
Texturing: Applying textures involves adding images or patterns to the surfaces of the model. These textures can include color, bump maps (to simulate unevenness and wrinkles), and more.
Assigning material: Assigning materials defines how light interacts with surfaces, including color, gloss, transparency, and reflection.
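The UV-mapping idea above can be made concrete with one cube face. In this illustrative sketch, each 3D corner of the face is paired with a 2D (u, v) coordinate in the flat texture image; texture coordinates conventionally run from 0.0 to 1.0 on both axes:

```python
# One square face of a cube in 3D space ...
face_3d = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

# ... "unwrapped" onto the 2D texture (here: the lower-left quarter
# of the texture image). These values are illustrative.
face_uv = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]

# The pairing tells the renderer which part of the texture image
# colors which point on the 3D surface:
for corner, uv in zip(face_3d, face_uv):
    print(corner, "->", uv)
```

The chocolate-Santa analogy maps directly onto this: `face_3d` is the foil wrapped around the figure, `face_uv` is the same foil laid flat.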
3. Rendering: The 3D model is "rendered" into the final content format
Rendering: Rendering creates a final image or animation from the 3D model. This includes simulating lighting, shadows, and materials to achieve a realistic result.
Export: Once the model is properly staged, it can be exported in various formats suitable for different purposes, such as games, animations, or static, two-dimensional product images.
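To show how simple such export formats can be, here is a sketch that writes the cube from step 1 in the widely used Wavefront OBJ format: plain text, with one `v x y z` line per vertex and one `f i j k l` line per face (OBJ indices are 1-based). The vertex and face data are illustrative:

```python
# Hand-rolled export of a unit cube to Wavefront OBJ text.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]
faces = [  # 1-based vertex indices, as OBJ requires
    (1, 2, 3, 4), (5, 6, 7, 8), (1, 2, 6, 5),
    (3, 4, 8, 7), (2, 3, 7, 6), (1, 4, 8, 5),
]

lines = [f"v {x} {y} {z}" for x, y, z in vertices]
lines += ["f " + " ".join(str(i) for i in face) for face in faces]
obj_text = "\n".join(lines)

print(obj_text.splitlines()[0])  # → v 0 0 0
```

Saved as a `.obj` file, this text can be opened in practically any 3D application; production formats like STEP or glTF carry more information (materials, units, hierarchy) but follow the same basic idea of serialized geometry.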
What is a 3D model?
A 3D model is a digital representation of an object or scene created with specialized computer software. Unlike traditional 2D models, 3D models have depth, allowing the user to perceive the object or environment from multiple perspectives. They accurately mimic the physical properties and geometry of their real-world counterparts.
Many generally understand a 3D model to also mean the final laid-out result of a 3D visualization, complete with surface texture, lighting, and color.
However, in technical circles, a 3D model refers only to the geometry, i.e., the pure framework of an object. Attributes such as surface texture, color, and lighting are realized later in the shading process.
3D Scanning vs. 3D Modeling Comparison
Anyone familiar with the individual capabilities and functions of 3D scanning and 3D modeling methods has probably already recognized the differences.
Here is a brief overview of the pros and cons of 3D scanning based on photogrammetry versus 3D modeling:
| Method | Pro | Con |
|---|---|---|
| 3D Scan (Photogrammetry) | fast results; low effort; ideal for large-scale geographies and traffic | requires a physical product; limited detail and quality; requires setting and light |
| 3D Model | easily enables other content formats (video, animation, AR, VR); scalable content production through reuse; no physical product necessary | initially more effort; requires 3D experts |
A clear example of the differences between 3D Scan and 3D Model:
Imagine a very simple, sharp-edged cube with 6 identical sides.
As a computer-generated 3D model, this cube consists of only 6 polygons, i.e., 6 simple geometries that are perfectly precise.
As a 3D scan, the cube would consist of a multitude of scanned points. That means a lot of data and inaccuracies: even a perfectly smooth surface is represented by many points, which show slight height differences across the surface.
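The cube comparison above can be put into numbers. This is an illustrative simulation, not real scan data: the modeled cube needs only 8 vertices and 6 faces, while even a single scanned face of the cube becomes thousands of slightly noisy sample points:

```python
import random

random.seed(0)

# The modeled cube: exact, minimal geometry.
model_vertices = 8
model_faces = 6

# A simulated scan of just the cube's top face: a 50 x 50 grid of
# sample points, each with a tiny random height error (the kind of
# unevenness a scan shows even on a perfectly flat surface).
scan_points = [
    (x / 49, y / 49, 1.0 + random.uniform(-0.002, 0.002))
    for x in range(50)
    for y in range(50)
]

print(model_vertices, model_faces, len(scan_points))  # → 8 6 2500
```

This is why scanned geometry usually needs the cleanup step mentioned earlier, while modeled geometry is precise from the start.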
How we at RenderThat use 3D Scanning and 3D Modeling
Conclusion: For many products and demands, 3D scanning alone is not enough
Unfortunately, there is currently no 3D scanning method that, without modeling, produces digital reconstructions of products good enough for high-quality renderings or animations.
NeRF technology could be a future opportunity to quickly "scan" products and render them from various perspectives.
Until then, for applications like animations and high-quality, reusable 3D models, modeling remains unsurpassed.
For product images, it is up to the user's requirements and wishes to decide which production method to choose.
Instead, the 3D scanning process is primarily used in the logistics and transportation industries, as well as for geographical and meteorological measurements. Additionally, the 3D printing industry and hobby 3D printers use 3D scans to create simple replicas of objects.
Daniel Erning
