3D Technologies

Improving Development Efficiency through 3D Simulation and Visualization
We introduce SANEI HYTECHS' R&D in 3D technologies, encompassing simulation, visualization, and data generation.

SANEI HYTECHS is driving the research and development of 3D technologies that support the evolution of robotics and AI. Targeting fields such as Model-Based Development (MBD) and smart agriculture, we are working on 3D simulation, visualization using 3DCG, and 3D data generation and processing technologies for outdoor environments.

R&D of 3D Technologies to Enhance Development Efficiency and Verification Accuracy

Importance of 3D Simulation and Visualization in R&D and Their Specific Benefits
Our R&D department has tackled a wide range of themes targeting real-world applications, including robotics, AI development, Model-Based Development (MBD), and smart agriculture. A common and vital factor in streamlining R&D and enhancing verification accuracy across these fields is the integration of "3D simulation" and "visualization using 3DCG."

In the development of robotics and AI, verifying operations with physical machines requires significant time and cost. By conducting simulations in a 3D virtual space, we can verify performance while replicating various conditions and scenarios, allowing for the rapid and easy identification of issues and improvements.

In Model-Based Development (MBD), simulation also plays a core role in the development workflow. By expanding the scope of simulation into 3D virtual spaces, we can gain a comprehensive view of overall product behavior, realizing more efficient impact analysis and verification during design changes.

Furthermore, visualizing simulation results through 3DCG allows for the intuitive confirmation of whether robots, AI, and software are functioning as intended. Whether it is analyzing crop layout and autonomous harvester paths in smart agriculture, or sharing the status of cities, rivers, and forests in disaster simulations, 3DCG visualization fosters understanding among stakeholders and supports decisive action.

We are committed to the research and development of 3D technologies and the commercialization of these results, specifically for the purposes of 3D simulation and 3DCG visualization.

3DCG Generation for Outdoor Environments

Efficient Large-Scale Virtual Space Generation Using Open Data and 3D Data Processing Algorithms
To verify the movements of outdoor robots and drones via 3D simulation, it is essential to have a 3D virtual space that replicates real-world environments, including terrain and buildings. However, manually creating 3DCG for expansive outdoor areas requires immense effort. To address this, SANEI HYTECHS is developing a system that efficiently generates 3DCG data compatible with game engines such as Unity by leveraging open data from projects such as "VIRTUAL SHIZUOKA" and "PLATEAU". The system consists of proprietary algorithms for high-quality, efficient data generation and an IT infrastructure built on AWS and other cloud services.

The figures below show examples of visualization in Unity. The generated 3DCG data can be easily integrated into our in-house "Robot Scenario Simulator," enabling seamless operational verification for outdoor robots and drones.
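One basic step such a pipeline needs is partitioning a huge georeferenced dataset into ground tiles so that each tile can be meshed and exported as a separate, engine-friendly file. The sketch below (plain Python, with a hypothetical 100 m tile size) illustrates that idea only; it is a simplified assumption, not the production algorithm described above.

```python
from collections import defaultdict

TILE_SIZE = 100.0  # meters per tile edge (hypothetical value for illustration)

def tile_points(points, tile_size=TILE_SIZE):
    """Group (x, y, z) points into square ground tiles keyed by (ix, iy).

    Each tile can then be meshed and exported separately, keeping individual
    files small enough for a game engine to load on demand.
    """
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (int(x // tile_size), int(y // tile_size))
        tiles[key].append((x, y, z))
    return dict(tiles)

# Example: three points, two of which fall in the same 100 m tile.
pts = [(10.0, 20.0, 1.0), (50.0, 90.0, 2.0), (150.0, 20.0, 3.0)]
tiles = tile_points(pts)
print(sorted(tiles))  # [(0, 0), (1, 0)]
```

In practice each tile would additionally be downsampled and assigned levels of detail before export, but the grid keying shown here is the common core of any such tiling scheme.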

Visualization Examples: VIRTUAL SHIZUOKA & PLATEAU Project

PLATEAU Project (Hamamatsu Station Area)
PLATEAU Project (Maebashi Station South Exit)
VIRTUAL SHIZUOKA (Kunozan Area)

Robot Scenario Simulator (RSS)

Operate "The Field" from Your Office, Without Setting Foot On-Site.
The "Robot Scenario Simulator" provided by SANEI HYTECHS uses digital twin technology to recreate real-world fields in a CG environment. Since you can upload your existing ROS-based programs as they are, visual operational verification is possible before ever running the actual machine. This reduces travel costs and enables repeated trial and error on your desktop until you are fully satisfied. You can check the actual usability and features through the trial version below.
Robot Scenario Simulator - Free Trial Version [URL]

Open-World Technology

Technology for Seamlessly Integrating Segmented Virtual Space Data
3DCG data for expansive outdoor environments results in extremely large file sizes. For example, the data for VIRTUAL SHIZUOKA exceeds 30TB, and even with aggressive optimization for weight reduction, it rarely falls below several hundred gigabytes. To display a virtual space on a computer, 3DCG data must be loaded into memory; however, it is unrealistic for users to provide hundreds of gigabytes of RAM.

To solve this, we are developing "Open-World" technology that partitions virtual space data for cloud storage and sequentially downloads and loads data for neighboring areas as the user moves within the virtual environment. 

Open-world technology is widely used in consumer video games. Integrating it into a 3D simulator lets users move seamlessly through the simulation without being conscious of data boundaries. Once the 3DCG data has been prepared, it is even possible to travel continuously across hundreds of kilometers within Japan.
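The neighbor-loading scheme described above can be sketched in a few lines: given the user's position and a load radius, compute the set of tiles that should be resident, then diff that set against what is already loaded to decide what to download and what to evict. This is an illustrative sketch in plain Python under an assumed 100 m tile grid, not our actual implementation.

```python
def tiles_in_radius(pos, radius, tile_size=100.0):
    """Return the set of tile indices whose square overlaps the load radius
    around position (x, y)."""
    x, y = pos
    ix0, ix1 = int((x - radius) // tile_size), int((x + radius) // tile_size)
    iy0, iy1 = int((y - radius) // tile_size), int((y + radius) // tile_size)
    return {(ix, iy) for ix in range(ix0, ix1 + 1)
                     for iy in range(iy0, iy1 + 1)}

def update_loaded(loaded, pos, radius):
    """Decide which tiles to download and which to unload as the user moves."""
    wanted = tiles_in_radius(pos, radius)
    to_load = wanted - loaded      # fetch these from cloud storage
    to_unload = loaded - wanted    # free the memory these occupy
    return wanted, to_load, to_unload

# Standing at (50, 50) with a 100 m radius keeps a 3x3 block of tiles loaded.
loaded = tiles_in_radius((50.0, 50.0), 100.0)
print(len(loaded))  # 9
# Moving 200 m east swaps out the two westernmost columns for two new ones.
loaded, to_load, to_unload = update_loaded(loaded, (250.0, 50.0), 100.0)
print(len(to_load), len(to_unload))  # 6 6
```

The real system additionally downloads asynchronously and ahead of the direction of travel, but the set-difference step shown here is what makes the streaming transparent to the user.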
Shizuoka Open World (North Side of Shizuoka Station)
Mechanism of the Open-World Technology

[Now on YouTube] Touring the Entire Shizuoka Prefecture in an Open-World Simulation

We have released an open-world 3D environment covering the entire Shizuoka Prefecture, built through the advanced processing and refinement of VIRTUAL SHIZUOKA's massive point cloud data. Enjoy a flight over Shizuoka, recreated through high-quality real-time rendering. To keep the experience engaging, the in-simulator time progresses 24 times faster than reality, allowing you to witness the dynamic transition between day and night. 

Note: The flight speed is calculated based on real-time parameters.
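The 24x time acceleration mentioned above amounts to a simple mapping from elapsed wall-clock time to the simulator's hour of day; one real hour covers a full day-night cycle. A minimal sketch (the noon start hour is an assumed parameter, not a documented default):

```python
TIME_SCALE = 24  # in-simulator time runs 24x faster than real time

def sim_time_of_day(elapsed_real_seconds, start_hour=12.0):
    """Map elapsed real seconds to the simulator's hour of day (0-24).

    `start_hour` is a hypothetical starting time for illustration.
    """
    sim_seconds = elapsed_real_seconds * TIME_SCALE
    return (start_hour + sim_seconds / 3600.0) % 24.0

# After one real hour at 24x, a full day-night cycle has elapsed.
print(sim_time_of_day(3600))  # 12.0
# After 15 real minutes, the sun has advanced six in-sim hours.
print(sim_time_of_day(900))   # 18.0
```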

Point Cloud Processing Technology

Algorithms for Converting Point Cloud Data to Polygon Data
"Point Cloud" format data is widely used when digitizing the shapes of terrain and buildings in real-world spaces. Point Cloud is a collection of points discretely placed within a 3D coordinate space, representing the surface shapes of the ground and objects. Sensors such as laser scanners and LiDAR are often used to measure real-world spaces, and the output information from these sensors is in point cloud format. By taking photos with an optical camera (ordinary camera) at the same time as the LiDAR, it is possible to assign colors to each point in the point cloud through subsequent alignment processing. Displaying this colored point cloud data in a 3D virtual space provides a high sense of presence. The data from VIRTUAL SHIZUOKA mentioned in the previous section also corresponds to colored point cloud data. However, there are challenges when using this virtual space for simulation purposes. For example, when attempting to run an autonomous robot, the ground and objects are represented by points, meaning the tires of the robot's wheels come into contact with "points" rather than a "surface," making it difficult to replicate realistic physical behavior. Furthermore, point clouds have large data volumes and tend to result in high processing loads. For instance, the application displaying the 3D virtual space becomes heavy. Therefore, we are conducting research and development on algorithms to convert point cloud data into lightweight polygon (polygonal) data. These algorithms are also utilized for processing open data such as VIRTUAL SHIZUOKA. Furthermore, during data conversion, we are not only performing simple replacement processes but also challenging ourselves to develop technologies for noise reduction and for estimating and interpolating missing areas.
Example of conversion between point cloud data and polygon data
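One elementary form of point-to-polygon conversion is a height-field mesh: bin the points into a regular grid, average the heights within each cell (which also smooths sensor noise), and connect the cell centers into triangles. The sketch below is a simplified illustration of that idea in plain Python, not our actual conversion algorithm; cells with no points fall back to height 0.0 here, whereas a real pipeline would interpolate them from neighbors as described above.

```python
def points_to_mesh(points, nx, ny, cell=1.0):
    """Convert (x, y, z) points to a height-field polygon mesh.

    Points are binned into an nx-by-ny grid; each cell's height is the mean
    z of its points (a crude noise filter). Vertices are the cell centers;
    each grid square becomes two triangles.
    """
    sums = [[0.0] * ny for _ in range(nx)]
    counts = [[0] * ny for _ in range(nx)]
    for x, y, z in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < nx and 0 <= j < ny:
            sums[i][j] += z
            counts[i][j] += 1
    # Empty cells default to 0.0; a real pipeline would interpolate them.
    heights = [[sums[i][j] / counts[i][j] if counts[i][j] else 0.0
                for j in range(ny)] for i in range(nx)]
    vertices = [((i + 0.5) * cell, (j + 0.5) * cell, heights[i][j])
                for i in range(nx) for j in range(ny)]
    triangles = []
    for i in range(nx - 1):
        for j in range(ny - 1):
            a, b = i * ny + j, i * ny + j + 1
            c, d = (i + 1) * ny + j, (i + 1) * ny + j + 1
            triangles += [(a, b, c), (b, d, c)]
    return vertices, triangles

# Two noisy samples in one cell of a 2x2 grid average to a single height.
verts, tris = points_to_mesh([(0.5, 0.5, 1.0), (0.5, 0.5, 3.0)], 2, 2)
print(len(verts), len(tris), verts[0][2])  # 4 2 2.0
```

The resulting triangle surface is what gives a simulated robot's tires something continuous to roll on, and it is far lighter to render than the raw points.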

3DCG Production for Outdoor Spaces

We also support 3DCG production based on manual creation.
While our basic policy is the automatic generation of 3DCG, some applications and conditions call for manual 3DCG production. When creating 3DCG for outdoor spaces, we normally rely on measured data from open data and other sources; however, depending on the purpose of the simulation or visualization, that data may not be accurate enough, or measurement may be impossible in the first place because the equipment to be modeled is still at the planning stage. In such cases, we produce the 3DCG manually. For example, in our smart agriculture research and development, we produce 3DCG of farms to conduct simulations.
We individually create 3D models of the farm terrain, crops such as fruit trees, warehouses for storing agricultural machinery, and telecommunication base stations equipped with solar panels, and place them within a single virtual space. By running simulations of autonomous harvesting robots in this space, we can verify the robots' motion scenarios and algorithms, improving the efficiency of actual machine development. A characteristic of our 3D technology is that, while we prioritize automatic generation as a rule, we can also accommodate manual production as needed.
Example of 3DCG production for a farm
Datasets Used and Processing Details
The images on this page are created by processing and integrating the following reliable open data.

Topographic Data: VIRTUAL SHIZUOKA © Shizuoka Prefecture, licensed under CC BY 4.0
Source: https://virtualshizuokaproject.my.canva.site/
Topographic Data (Hamamatsu Station): Partially created using processed GSI Digital Japan Basic Map (Orthophoto) data.
Source: https://www.gsi.go.jp/

Licensed under CC BY 4.0: https://creativecommons.org/licenses/by/4.0/